[[Category:File systems (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system set up on a single drive, but you would like to add some redundancy by using RAID-1 to mirror your data across 2 drives. This guide takes the following steps to make the required changes, without losing data:<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though the aim is to convert to a RAID setup without losing data, there is no guarantee the process will go perfectly, and there is a real risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x swap partitions (1 on each disk)<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of the data on disk changing in the middle of the conversion, I suggest you drop to single user mode before you start, using the {{Codeline|telinit 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create new RAID array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to set up 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
telinit 1<br />
<br />
To see the current partitions:<br />
fdisk -l<br />
<br />
To partition the new disk:<br />
fdisk /dev/sdb<br />
<br />
Then enter the following fdisk commands to partition the new disk. Everything after the "#" is an explanation of what the command does:<br />
c # Turn off DOS compatibility (optional).<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, about 1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
2 # second partition<br />
press enter # Accepts the default: start immediately after the end of the first partition<br />
press enter # Accepts the default: use all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "Linux raid autodetect"<br />
a # Toggle the bootable flag to be "on"<br />
2 # for partition number 2.<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID device ===<br />
Next, create the single-disk RAID-1 array. Note that the {{Codeline|"missing"}} keyword is specified as one of our devices. We will fill in this missing device later.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", reboot and run the command again.<br />
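Alternatively, asking the kernel to re-read the partition table may avoid the reboot. This is a sketch, assuming {{Codeline|partprobe}} (from the parted package) is available:<br />
 [root@arch ~]# partprobe /dev/sdb<br />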
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state, because it is missing half its devices: the {{Codeline|_}} in {{Codeline|[_U]}} above marks the missing half.<br />
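For a more detailed view of the array's state, you can also inspect it with mdadm; expect the State line to report the array as degraded, with one device slot listed as removed:<br />
 [root@arch ~]# mdadm --detail /dev/md0<br />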
<br />
=== Make file systems ===<br />
Use the file system of your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up the swap area on the new swap partition:<br />
[root@arch ~]# mkswap -L NEW-SWAP /dev/sdb1<br />
Setting up swapspace version 1, size = 271314 kB<br />
LABEL=NEW-SWAP, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array and copy everything from the old system to the new array.<br />
<br />
=== Mount the array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the data ===<br />
[root@arch ~]# rsync -avxHAXS --delete --progress / /mnt/new-raid<br />
<br />
Note that by using the '''-x''' option you are limiting rsync to a single file system. If you have a more traditional file system layout, with different partitions for /boot, /home, and perhaps others, you will need to rsync those file systems separately. For example:<br />
[root@arch ~]# rsync -avxHAXS --delete --progress /boot /mnt/new-raid/boot<br />
[root@arch ~]# rsync -avxHAXS --delete --progress /home /mnt/new-raid/home<br />
<br />
Alternatively, you can use tar instead of the above rsync command if you prefer; rsync will, however, be quicker if you are only copying over changes. The tar equivalent (note the second tar extracts into the array) is: <code>tar -C / -clspf - . | tar -C /mnt/new-raid -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disk<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,1)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0 md=0,/dev/sda2,/dev/sdb2<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
Also update the "kopt" and "groot" sections, as shown below, if they are in your {{Filename|/mnt/new-raid/boot/grub/menu.lst}} file, because it will make applying distribution kernel updates easier:<br />
- # kopt=root=UUID=fbafab1a-18f5-4bb9-9e66-a71c1b00977e ro<br />
+ # kopt=root=/dev/md0 ro md=0,/dev/sda2,/dev/sdb2<br />
<br />
## default grub root device<br />
## e.g. groot=(hd0,0)<br />
- # groot=(hd0,0)<br />
+ # groot=(hd0,1)<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices. It's better to use UUID codes here, which should not change, even if our partition detection order changes or a drive gets removed.<br />
<br />
To find the UUID to use:<br />
[root@arch ~]# blkid<br />
/dev/sda1: TYPE="swap" UUID="34656682-34ad-8ed5-9233-dfab42272212" <br />
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP" <br />
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE" <br />
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2" <br />
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid" <br />
<br />
Look for the partition labeled "NEW-SWAP" on /dev/sdb1, which we created above. Copy your swap partition's UUID into the new fstab, as shown below. We also add /dev/md0 as our root mount point.<br />
<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
 # /dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root /]# <br />
<br />
If the chroot command gives you an error like <code>chroot: failed to run command `/bin/zsh': No such file or directory</code>, then use <code>chroot /mnt/new-raid/ /bin/bash</code> instead.<br />
<br />
You are now chrooted into what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, regardless of the Linux variant).<br />
<br />
==== For Arch Linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|mdadm}} in the HOOKS array. Place it after {{Codeline|autodetect}}, {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
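For example, the HOOKS line might then look like this (a sketch only; the exact hook list depends on your hardware and existing configuration):<br />
 HOOKS="base udev autodetect pata scsi sata mdadm filesystems"<br />
Then rebuild the initcpio image and leave the chroot:<br />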
[root /]# mkinitcpio -g /boot/kernel26.img<br />
[root /]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to your email address, if you want emailed alerts of problems with the RAID-1 array.<br />
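For example (the address is a placeholder; use your own):<br />
 MAILADDR your.address@example.com<br />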
<br />
Then save the array configuration, with UUIDs, to make it easier for the system to find /dev/md0 on boot-up. If you don't do this, you can get an "ALERT! /dev/md0 does not exist" error when booting:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the two above changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
This rebuilds the initramfs for your currently running kernel only. To rebuild the others, the following command will list the installed kernel versions (on Ubuntu):<br />
ls /boot/ | perl -lne "/^[A-z\.\-]+/m && print $'" | egrep -e 'openvz$|generic$|server$' | sort -u<br />
Then run {{Codeline|update-initramfs}} for each of those versions, so that every kernel is usable with the new RAID setup, as sketched below.<br />
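A minimal sketch of scripting that, assuming the listing above prints one kernel version per line:<br />
 for v in $(ls /boot/ | perl -lne "/^[A-z\.\-]+/m && print $'" | egrep -e 'openvz$|generic$|server$' | sort -u); do<br />
     update-initramfs -k "$v" -c -t   # rebuild the initramfs for each listed kernel version<br />
 done<br />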
<br />
=== Install GRUB on the RAID array ===<br />
Start grub:<br />
<br />
[root@arch ~]# grub --no-floppy<br />
<br />
Then we find our two boot partitions: the current one at (hd0,0) (i.e. first disk, first partition), and the one we just created at (hd1,1) (i.e. second disk, second partition). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell grub to assume that the new second drive is (hd0), i.e. the first disk in the system, even though that is not currently the case. The reasoning: if your first disk fails and you remove it, or if you change the order in which disks are detected in the BIOS so that you can boot from your second disk, then your second disk becomes the first disk in the system. The MBR we are about to write will then be correct, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p (hd0,1)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify that you have booted from the RAID array by looking at the output of mount. Also check /proc/mdstat again to confirm which disk is in the array.<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
<br />
Also check {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note that it is the swap partition on {{Filename|sdb}} that is in use, and nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine, and the output of the above commands is correct, then congratulations! You are now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add original disk to array ==<br />
<br />
=== Partition original disk ===<br />
<br />
Copy the partition table from /dev/sdb (newly implemented RAID disk) to /dev/sda (second disk we are adding to the array) so that both disks have exactly the same layout.<br />
[root@arch ~]# sfdisk -d /dev/sdb | sfdisk /dev/sda<br />
<br />
Alternate method - this will output the /dev/sdb partition layout to a file, which is then used as input for partitioning /dev/sda:<br />
[root@arch ~]# sfdisk -d /dev/sdb > raidinfo-partitions.sdb<br />
[root@arch ~]# sfdisk /dev/sda < raidinfo-partitions.sdb<br />
Use {{Codeline|--force}} if needed:<br />
[root@arch ~]# sfdisk --force /dev/sda < raidinfo-partitions.sdb<br />
<br />
Verify that the partitioning is identical:<br />
[root@arch ~]# fdisk -l<br />
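A stricter check is to compare the sfdisk dumps directly. This sketch assumes bash (for process substitution) and simply rewrites the device names before comparing, so identical layouts produce no output:<br />
 [root@arch ~]# diff <(sfdisk -d /dev/sda | sed 's/sda/sdb/g') <(sfdisk -d /dev/sdb)<br />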
<br />
==== Note ====<br />
If you get an error like this when attempting to add the partition to the array:<br />
 mdadm: /dev/sda1 not large enough to join array<br />
then you might have seen an earlier warning message, when partitioning this disk, that the kernel still sees the old disk size. A reboot ought to fix this; then try adding the partition to the array again.<br />
<br />
=== Add disk partition to array ===<br />
[root@arch ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none><br />
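To follow the rebuild progress continuously (assuming {{Codeline|watch}} from procps; press ctrl-c to exit):<br />
 [root@arch ~]# watch -n 1 cat /proc/mdstat<br />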
<br />
=== Add second swap partition ===<br />
The partition was created with sfdisk above, but it still has to be set up as swap:<br />
[root@arch ~]# mkswap -L SWAP /dev/sda1<br />
Setting up swapspace version 1, size = 271314 kB<br />
LABEL=SWAP, UUID=1acd55dc-f73f-4639-94bc-3f30c33710c9<br />
<br />
Then add this UUID to the fstab, exactly like the other one earlier. Note that since you have now booted into the array, the file to edit is {{Filename|/etc/fstab}}. When done, it should look similar to this:<br />
 [root@arch ~]# cat /etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=1acd55dc-f73f-4639-94bc-3f30c33710c9 none swap sw 0 0<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
<br />
It can be activated immediately:<br />
[root@arch ~]# swapon /dev/sda1<br />
<br />
=== Verify that email alerts are working ===<br />
<br />
If you run this command, you should get a notification email showing the contents of /proc/mdstat:<br />
[root@arch ~]# mdadm --monitor --test --oneshot /dev/md0<br />
Check that you get the test email notification. This way you will be aware if one of the disks in the array fails (otherwise it may fail silently, putting you at risk of data loss if another drive should also fail). If you don't get an email notification, check the "MAILADDR" line in mdadm.conf, and also check that sending an email to this address from the command line (e.g. using the "mail" command) works, as sketched below.
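A quick way to test command-line mail delivery itself (the address is a placeholder; this assumes a {{Codeline|mail}} command, e.g. from mailx, is installed and configured):<br />
 [root@arch ~]# echo "mdadm mail test" | mail -s "mdadm mail test" your.address@example.com<br />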
<hr />
<div>[[Category:File systems (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system setup on a single drive, but you would like to add some redundancy to the setup by using RAID-1 to mirror your data across 2 drives. This guide follows the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there's no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|telinit 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create new RAID array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to setup 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
telinit 1<br />
<br />
To see the current partitions:<br />
fdisk -l<br />
<br />
To partition the new disk<br />
fdisk /dev/sdb<br />
<br />
Then the fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
c # Turn off DOS compatibility (optional).<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
2 # second partition<br />
press enter # Uses the default start from the end of the first partition<br />
press enter # Uses the default of using all the remain space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
a # Toggle the bootable flag to be "on"<br />
2 # for partition number 2.<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices. We are going to fill this missing device later.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The devices are intact, however in a degraded state. (Because it's missing half the array!)<br />
<br />
=== Make file systems ===<br />
Use the file system of your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Make a file system on the swap partition:<br />
[root@arch ~]# mkswap -L NEW-SWAP /dev/sdb1<br />
Setting up swapspace version 1, size = 271314 kB<br />
LABEL=NEW-SWAP, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array, and copy everything from the old system to the new system<br />
<br />
=== Mount the array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the data ===<br />
[root@arch ~]# rsync -avxHAXS --delete --progress / /mnt/new-raid<br />
<br />
Note that by using the '''-x''' option you are limiting rsync to a single file system. If you have a more traditional file system layout, with different partitions for /boot, /home, and perhaps others, you will need to rsync those file systems separately. For example:<br />
[root@arch ~]# rsync -avxHAXS --delete --progress /boot /mnt/new-raid/boot<br />
[root@arch ~]# rsync -avxHAXS --delete --progress /home /mnt/new-raid/home<br />
<br />
Alternatively, you can use tar instead of the above rsync command if you prefer. rsync will, however, be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0 md=0,/dev/sda2,/dev/sdb2<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
Also update the "kopt" and "groot" sections, as shown below, if they are in your {{Filename|/mnt/new-raid/boot/grub/menu.lst}} file, because it will make applying distribution kernel updates easier:<br />
- # kopt=root=UUID=fbafab1a-18f5-4bb9-9e66-a71c1b00977e ro<br />
+ # kopt=root=/dev/md0 ro md=0,/dev/sda2,/dev/sdb2<br />
<br />
## default grub root device<br />
## e.g. groot=(hd0,0)<br />
- # groot=(hd0,0)<br />
+ # groot=(hd0,1)<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices. It's better to use UUID codes here, which should not change, even if our partition detection order changes or a drive gets removed.<br />
<br />
To find the UUID to use:<br />
[root@arch ~]# blkid<br />
/dev/sda1: TYPE="swap" UUID="34656682b-34ad-8ed5-9233-dfab42272212" <br />
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP" <br />
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE" <br />
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2" <br />
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid" <br />
<br />
Look for the partition labeled "NEW-SWAP", on /dev/sdb1, that we created above. Copy your swap partition's UUID into the new fstab, as shown below. Of course we also add /dev/md0, as our root mount point.<br />
<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
<!-- /dev/sdb1 swap swap defaults 0 0 --><br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root /]# <br />
<br />
If the chroot command gives you an error like <code>chroot: failed to run command `/bin/zsh': No such file or directory</code>, then use <code>chroot /mnt/new-raid/ /bin/bash</code> instead.<br />
<br />
You are now chrooted in what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, regardless of the Linux variant).<br />
<br />
==== For Arch Linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|mdadm}} in the HOOKS array. Place it after {{Codeline|autodetect}}, {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root /]# mkinitcpio -g /boot/kernel26.img<br />
[root /]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on boot-up. If you don't do this, you can get an "ALERT! /dev/md0 does not exist" error when booting :<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the two above changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
This will rebuild your running version - to rebuild others, this will show a listing (in Ubuntu):<br />
ls /boot/ | perl -lne "/^[A-z\.\-]+/m && print $'" | egrep -e 'openvz$|generic$|server$' | sort -u<br />
Then substitute/script in these others so that all are available for use with the new RAID setup.<br />
<br />
=== Install GRUB on the RAID array ===<br />
Start grub:<br />
<br />
[root@arch ~]# grub --no-floppy<br />
<br />
Then we find our two partitions - the current one (hd0,0) (I.e. first disk, first partition), and (hd1,1) (i.e. the partition we just added above, on the second partition of the second drive). Check you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell grub to assume the new second drive is (hd0), i.e. the first disk in the system (when it is not currently the case). If your first disk fails, however, and you remove it, or you change the order disks are detected in the BIOS so that you can boot from your second disk, then your second disk will become the first disk in the system. The MBR will then be correct, your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p (hd0,1)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select to boot the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify you have booted from the RAID array by looking at the output of mount. Also check mdstat again only to confirm which disk is in the array.<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If system boots fine, and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can add the original disk to the array now to bring it up to full performance.<br />
<br />
== Add original disk to array ==<br />
<br />
=== Partition original disk ===<br />
<br />
Copy the partition table from /dev/sdb (newly implemented RAID disk) to /dev/sda (second disk we are adding to the array) so that both disks have exactly the same layout.<br />
[root@arch ~]# sfdisk -d /dev/sdb | sfdisk /dev/sda<br />
<br />
Alternate method - this will output the /dev/sdb partition layout to a file, then it's used as input for partitioning /dev/sda.<br />
[root@arch ~]# sfdisk -d /dev/sdb > raidinfo-partitions.sdb<br />
[root@arch ~]# sfdisk /dev/sda < raidinfo-partitions.sdb<br />
Use the --force if needed.<br />
[root@arch ~]# sfdisk --force /dev/sda < raidinfo-partitions.sdb<br />
<br />
Verify that the partitioning is identical:<br />
[root@arch ~]# fdisk -l<br />
<br />
==== Note ====<br />
If you get an error when attempting to add the parition to the array: <br />
mdadm: /dev/sda1 not large enough to join array<br />
You might have seen an earlier warning message when partitioning this disk that the kernel<br />
still sees the old disk size - a reboot ought to fix this, then try adding again to the array.<br />
<br />
=== Add disk partition to array ===<br />
[root@arch ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none><br />
<br />
=== Add second swap partition ===<br />
The partition was created with sfdisk, but it still has to be formatted for swap.<br />
[root@arch ~]# mkswap -L SWAP /dev/sda1<br />
Setting up swapspace version 1, size = 271314 kB<br />
LABEL=SWAP, UUID=1acd55dc-f73f-4639-94bc-3f30c33710c9<br />
<br />
Then add this UUID to the fstab exactly like the other one earlier. When done, it should look similar to this:<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=1acd55dc-f73f-4639-94bc-3f30c33710c9 none swap sw 0 0<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
<br />
It can be activated immediately:<br />
[root@arch ~]# swapon /dev/sda1<br />
<br />
=== Verify that email alerts are working ===<br />
<br />
If you run this command, then you should get an notification email showing the contents of /proc/mdstat :<br />
[root@arch ~]# mdadm --monitor --test --oneshot /dev/md0<br />
Check that you get the test email notification. This way you can be aware if one of the disks in the array fails (otherwise it may fail silently, putting you at risk of data loss if another drive should also fail). If you don't get an email notification, check the "MAILADDR" line in mdadm.conf, and also check that sending an email to this address from the command line (e.g. using the "mail" command) works.</div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=115212Convert a single drive system to RAID2010-08-25T02:00:43Z<p>Nickj: /* Add second swap partition */ add section on testing email notifications.</p>
<hr />
<div>[[Category:File systems (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system setup on a single drive, but you would like to add some redundancy to the setup by using RAID-1 to mirror your data across 2 drives. This guide follows the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there's no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|telinit 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create new RAID array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to setup 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
telinit 1<br />
<br />
To see the current partitions:<br />
fdisk -l<br />
<br />
To partition the new disk<br />
fdisk /dev/sdb<br />
<br />
Then the fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
c # Turn off DOS compatibility (optional).<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
2 # second partition<br />
press enter # Uses the default start from the end of the first partition<br />
press enter # Uses the default of using all the remain space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
a # Toggle the bootable flag to be "on"<br />
2 # for partition number 2.<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices. We are going to fill this missing device later.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The devices are intact, however in a degraded state. (Because it's missing half the array!)<br />
<br />
=== Make file systems ===<br />
Use the file system of your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Make a file system on the swap partition:<br />
[root@arch ~]# mkswap -L NEW-SWAP /dev/sdb1<br />
Setting up swapspace version 1, size = 271314 kB<br />
LABEL=NEW-SWAP, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array, and copy everything from the old system to the new system<br />
<br />
=== Mount the array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the data ===<br />
[root@arch ~]# rsync -avxHAXS --delete --progress / /mnt/new-raid<br />
<br />
Note that by using the '''-x''' option you are limiting rsync to a single file system. If you have a more traditional file system layout, with different partitions for /boot, /home, and perhaps others, you will need to rsync those file systems separately. For example:<br />
[root@arch ~]# rsync -avxHAXS --delete --progress /boot /mnt/new-raid/boot<br />
[root@arch ~]# rsync -avxHAXS --delete --progress /home /mnt/new-raid/home<br />
<br />
Alternatively, you can use tar instead of the above rsync command if you prefer. rsync will, however, be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0 md=0,/dev/sda2,/dev/sdb2<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
Also update the "kopt" and "groot" sections, as shown below, if they are in your {{Filename|/mnt/new-raid/boot/grub/menu.lst}} file, because it will make applying distribution kernel updates easier:<br />
- # kopt=root=UUID=fbafab1a-18f5-4bb9-9e66-a71c1b00977e ro<br />
+ # kopt=root=/dev/md0 ro md=0,/dev/sda2,/dev/sdb2<br />
<br />
## default grub root device<br />
## e.g. groot=(hd0,0)<br />
- # groot=(hd0,0)<br />
+ # groot=(hd0,1)<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices. It's better to use UUID codes here, which should not change, even if our partition detection order changes or a drive gets removed.<br />
<br />
To find the UUID to use:<br />
[root@arch ~]# blkid<br />
/dev/sda1: TYPE="swap" UUID="34656682b-34ad-8ed5-9233-dfab42272212" <br />
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP" <br />
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE" <br />
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2" <br />
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid" <br />
<br />
Look for the partition labeled "NEW-SWAP", on /dev/sdb1, that we created above. Copy your swap partition's UUID into the new fstab, as shown below. Of course we also add /dev/md0, as our root mount point.<br />
<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
<!-- /dev/sdb1 swap swap defaults 0 0 --><br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root /]# <br />
<br />
If the chroot command gives you an error like <code>chroot: failed to run command `/bin/zsh': No such file or directory</code>, then use <code>chroot /mnt/new-raid/ /bin/bash</code> instead.<br />
<br />
You are now chrooted in what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, regardless of the Linux variant).<br />
<br />
==== For Arch Linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|mdadm}} in the HOOKS array. Place it after {{Codeline|autodetect}}, {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root /]# mkinitcpio -g /boot/kernel26.img<br />
[root /]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on boot-up. If you don't do this, you can get an "ALERT! /dev/md0 does not exist" error when booting :<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the two above changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
This will rebuild your running version - to rebuild others, this will show a listing (in Ubuntu):<br />
ls /boot/ | perl -lne "/^[A-z\.\-]+/m && print $'" | egrep -e 'openvz$|generic$|server$' | sort -u<br />
Then substitute/script in these others so that all are available for use with the new RAID setup.<br />
<br />
=== Install GRUB on the RAID array ===<br />
Start grub:<br />
<br />
[root@arch ~]# grub --no-floppy<br />
<br />
Then we find our two partitions - the current one (hd0,0) (I.e. first disk, first partition), and (hd1,1) (i.e. the partition we just added above, on the second partition of the second drive). Check you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell grub to assume the new second drive is (hd0), i.e. the first disk in the system (when it is not currently the case). If your first disk fails, however, and you remove it, or you change the order disks are detected in the BIOS so that you can boot from your second disk, then your second disk will become the first disk in the system. The MBR will then be correct, your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p (hd0,1)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select to boot the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify you have booted from the RAID array by looking at the output of mount. Also check mdstat again only to confirm which disk is in the array.<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If system boots fine, and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can add the original disk to the array now to bring it up to full performance.<br />
<br />
== Add original disk to array ==<br />
<br />
=== Partition original disk ===<br />
<br />
Copy the partition table from /dev/sdb (newly implemented RAID disk) to /dev/sda (second disk we are adding to the array) so that both disks have exactly the same layout.<br />
[root@arch ~]# sfdisk -d /dev/sdb | sfdisk /dev/sda<br />
<br />
Alternate method - this will output the /dev/sdb partition layout to a file, then it's used as input for partitioning /dev/sda.<br />
[root@arch ~]# sfdisk -d /dev/sdb > raidinfo-partitions.sdb<br />
[root@arch ~]# sfdisk /dev/sda < raidinfo-partitions.sdb<br />
Use the --force if needed.<br />
[root@arch ~]# sfdisk --force /dev/sda < raidinfo-partitions.sdb<br />
<br />
Verify that the partitioning is identical:<br />
[root@arch ~]# fdisk -l<br />
<br />
==== Note ====<br />
If you get an error when attempting to add the parition to the array: <br />
mdadm: /dev/sda1 not large enough to join array<br />
You might have seen an earlier warning message when partitioning this disk that the kernel<br />
still sees the old disk size - a reboot ought to fix this, then try adding again to the array.<br />
<br />
=== Add disk partition to array ===<br />
[root@arch ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none><br />
<br />
=== Add second swap partition ===<br />
The partition was created with sfdisk, but it still has to be formatted for swap.<br />
[root@arch ~]# mkswap -L SWAP /dev/sda1<br />
Setting up swapspace version 1, size = 271314 kB<br />
LABEL=SWAP, UUID=1acd55dc-f73f-4639-94bc-3f30c33710c9<br />
<br />
Then add this UUID to the fstab exactly like the other one earlier. When done, it should look similar to this:<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=1acd55dc-f73f-4639-94bc-3f30c33710c9 none swap sw 0 0<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
<br />
It can be activated immediately:<br />
[root@arch ~]# swapon /dev/sda1<br />
<br />
=== Verify that email alerts are working ===<br />
<br />
If you run this command, then you should get an notification email showing the contents of /proc/mdstat :<br />
[root@arch ~]# mdadm --monitor --test /dev/md0<br />
You will probably have to press ctrl-c after about 5 or 10 seconds. Check that you get the test email notification. This way you can be aware if one of the disks in the array fails (otherwise it may fail silently, putting you at risk of data loss if another drive should also fail). If you don't get an email notification, check that the "MAILADDR" line is configured in mdadm.conf, and also check that sending emails from the command line (e.g. using the "mail" command) works.</div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=111863Convert a single drive system to RAID2010-07-13T08:25:19Z<p>Nickj: /* Rebuild initcpio or initramfs */ add shell path to chroot command (needed for some bootable disks, like the "System Rescue CD").</p>
<hr />
<div>[[Category:File systems (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system setup on a single drive, but you would like to add some redundancy to the setup by using RAID-1 to mirror your data across 2 drives. This guide follows the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there's no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|telinit 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create new RAID array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to setup 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
telinit 1<br />
<br />
To see the current partitions:<br />
fdisk -l<br />
<br />
To partition the new disk<br />
fdisk /dev/sdb<br />
<br />
Then the fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
c # Turn off DOS compatibility (optional).<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
2 # second partition<br />
press enter # Uses the default start from the end of the first partition<br />
press enter # Uses the default of using all the remain space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
a # Toggle the bootable flag to be "on"<br />
2 # for partition number 2.<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices. We are going to fill this missing device later.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state (because it is missing half its devices!).<br />
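As a cross-check, mdadm can report the array state directly. The exact wording varies between mdadm versions, but expect the state to be reported as degraded, with one slot listed as removed:<br />
 [root@arch ~]# mdadm --detail /dev/md0<br />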
<br />
=== Make file systems ===<br />
Use the file system of your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up swap space on the swap partition:<br />
[root@arch ~]# mkswap -L NEW-SWAP /dev/sdb1<br />
Setting up swapspace version 1, size = 271314 kB<br />
LABEL=NEW-SWAP, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy data ==<br />
The new RAID-1 array is ready to start accepting data! Now we need to mount the array and copy everything from the old system to the new one.<br />
<br />
=== Mount the array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the data ===<br />
[root@arch ~]# rsync -avxHAXS --delete --progress / /mnt/new-raid<br />
<br />
Note that by using the '''-x''' option you are limiting rsync to a single file system. If you have a more traditional file system layout, with different partitions for /boot, /home, and perhaps others, you will need to rsync those file systems separately. For example:<br />
[root@arch ~]# rsync -avxHAXS --delete --progress /boot /mnt/new-raid/boot<br />
[root@arch ~]# rsync -avxHAXS --delete --progress /home /mnt/new-raid/home<br />
<br />
Alternatively, you can use tar instead of the above rsync command if you prefer. rsync will, however, be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
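Before moving on, a quick sanity check of the copy can be reassuring: rerunning rsync with the extra --dry-run flag (all other flags as above) should report little or nothing left to transfer if the copy completed:<br />
 [root@arch ~]# rsync -avxHAXS --delete --dry-run / /mnt/new-raid<br />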
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0 md=0,/dev/sda2,/dev/sdb2<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
Also update the "kopt" and "groot" sections, as shown below, if they are in your {{Filename|/mnt/new-raid/boot/grub/menu.lst}} file, because it will make applying distribution kernel updates easier:<br />
- # kopt=root=UUID=fbafab1a-18f5-4bb9-9e66-a71c1b00977e ro<br />
+ # kopt=root=/dev/md0 ro md=0,/dev/sda2,/dev/sdb2<br />
<br />
## default grub root device<br />
## e.g. groot=(hd0,0)<br />
- # groot=(hd0,0)<br />
+ # groot=(hd0,1)<br />
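If your distribution manages {{Filename|menu.lst}} automatically, as Debian and Ubuntu do for GRUB legacy, you can later regenerate the kernel entries from these kopt/groot values by running the following inside the chroot that is set up below (this assumes the update-grub helper is installed):<br />
 update-grub<br />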
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices. It's better to use UUID codes here, which should not change, even if our partition detection order changes or a drive gets removed.<br />
<br />
To find the UUID to use:<br />
[root@arch ~]# blkid<br />
/dev/sda1: TYPE="swap" UUID="34656682b-34ad-8ed5-9233-dfab42272212" <br />
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP" <br />
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE" <br />
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2" <br />
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid" <br />
<br />
Look for the partition labeled "NEW-SWAP", on /dev/sdb1, that we created above. Copy your swap partition's UUID into the new fstab, as shown below. Of course we also add /dev/md0, as our root mount point.<br />
<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
# /dev/sdb1 swap swap defaults 0 0<br />
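If you prefer, the root entry can also be written with the array's UUID from the blkid output above; either form should work here:<br />
 UUID=6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e / ext3 defaults 0 1<br />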
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root /]# <br />
<br />
If the chroot command gives you an error like <code>chroot: failed to run command `/bin/zsh': No such file or directory</code>, then use <code>chroot /mnt/new-raid/ /bin/bash</code> instead.<br />
<br />
You are now chrooted in what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, regardless of the Linux variant).<br />
<br />
==== For Arch Linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|mdadm}} in the HOOKS array. Place it after {{Codeline|autodetect}}, {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
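For example, the finished HOOKS line might look something like this (a sketch; the exact hook list depends on your hardware and your mkinitcpio defaults):<br />
 HOOKS="base udev autodetect pata scsi sata mdadm filesystems"<br />
Then rebuild the image and leave the chroot:<br />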
[root /]# mkinitcpio -g /boot/kernel26.img<br />
[root /]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on boot-up. If you don't do this, you can get an "ALERT! /dev/md0 does not exist" error when booting:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
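You can eyeball what was appended; there should now be at least one ARRAY line containing the array's UUID:<br />
 grep ^ARRAY /etc/mdadm/mdadm.conf<br />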
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
This rebuilds the initramfs for the currently running kernel. To rebuild the others, the following will show a listing of installed kernel versions (on Ubuntu):<br />
ls /boot/ | perl -lne "/^[A-z\.\-]+/m && print $'" | egrep -e 'openvz$|generic$|server$' | sort -u<br />
Then rerun update-initramfs for each of these versions so that all of them are ready for use with the new RAID setup, as sketched below.<br />
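A small loop along these lines can rebuild them all. The version strings here are purely illustrative; substitute the ones listed by the command above:<br />
 for v in 2.6.32-21-generic 2.6.32-21-server; do<br />
     update-initramfs -k "$v" -c -t<br />
 done<br />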
<br />
=== Install GRUB on the RAID array ===<br />
Start grub:<br />
<br />
[root@arch ~]# grub --no-floppy<br />
<br />
Then we find our two partitions: the current one, (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the partition we just added above, on the second partition of the second drive). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell grub to assume the new second drive is (hd0), i.e. the first disk in the system (which is not currently the case). If your first disk fails and you remove it, or you change the order in which disks are detected in the BIOS so that you can boot from your second disk, then your second disk will become the first disk in the system. The MBR will then be correct: your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p (hd0,1)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select to boot the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify that you have booted from the RAID array by looking at the output of mount. Also check /proc/mdstat again to confirm which disk is in the array.<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore redundancy and bring it up to full performance.<br />
<br />
== Add original disk to array ==<br />
<br />
=== Partition original disk ===<br />
<br />
Copy the partition table from /dev/sdb (newly implemented RAID disk) to /dev/sda (second disk we are adding to the array) so that both disks have exactly the same layout.<br />
[root@arch ~]# sfdisk -d /dev/sdb | sfdisk /dev/sda<br />
<br />
Alternate method: output the /dev/sdb partition layout to a file, then use that file as input for partitioning /dev/sda.<br />
[root@arch ~]# sfdisk -d /dev/sdb > raidinfo-partitions.sdb<br />
[root@arch ~]# sfdisk /dev/sda < raidinfo-partitions.sdb<br />
Use --force if needed:<br />
[root@arch ~]# sfdisk --force /dev/sda < raidinfo-partitions.sdb<br />
<br />
Verify that the partitioning is identical:<br />
[root@arch ~]# fdisk -l<br />
<br />
==== Note ====<br />
If you get an error when attempting to add the partition to the array:<br />
mdadm: /dev/sda1 not large enough to join array<br />
You might have seen an earlier warning message when partitioning this disk that the kernel still sees the old disk size. A reboot ought to fix this; then try adding the partition to the array again.<br />
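Depending on your kernel and tools, you may be able to make the kernel re-read the partition table without a full reboot; if in doubt, simply reboot:<br />
 blockdev --rereadpt /dev/sda<br />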
<br />
=== Add disk partition to array ===<br />
[root@arch ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none><br />
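The rebuild continues in the background. To follow its progress without retyping the command, something like this works:<br />
 watch -n 5 cat /proc/mdstat<br />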
<br />
=== Add second swap partition ===<br />
The partition was created with sfdisk, but it still has to be formatted for swap.<br />
[root@arch ~]# mkswap -L SWAP /dev/sda1<br />
Setting up swapspace version 1, size = 271314 kB<br />
LABEL=SWAP, UUID=1acd55dc-f73f-4639-94bc-3f30c33710c9<br />
<br />
Then add this UUID to the fstab exactly like the other one earlier. When done, it should look similar to this:<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=1acd55dc-f73f-4639-94bc-3f30c33710c9 none swap sw 0 0<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
<br />
It can be activated immediately:<br />
[root@arch ~]# swapon /dev/sda1</div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=111862Convert a single drive system to RAID2010-07-13T08:21:33Z<p>Nickj: /* Partition the Disk */ add bootable flag + remove old DOS compatibility</p>
<hr />
<div>[[Category:File systems (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system set up on a single drive, but you would like to add some redundancy by using RAID-1 to mirror your data across 2 drives. This guide takes the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there is no guarantee the process will go perfectly, and there is a real risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|telinit 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create new RAID array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to set up 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
telinit 1<br />
<br />
To see the current partitions:<br />
fdisk -l<br />
<br />
To partition the new disk:<br />
fdisk /dev/sdb<br />
<br />
Then run the following fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
c # Turn off DOS compatibility (optional).<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
2 # second partition<br />
press enter # Uses the default start from the end of the first partition<br />
press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
a # Toggle the bootable flag to be "on"<br />
2 # for partition number 2.<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices. We are going to fill this missing device later.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state (because it is missing half its devices!).<br />
<br />
=== Make file systems ===<br />
Use the file system of your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up swap space on the swap partition:<br />
[root@arch ~]# mkswap -L NEW-SWAP /dev/sdb1<br />
Setting up swapspace version 1, size = 271314 kB<br />
LABEL=NEW-SWAP, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy data ==<br />
The new RAID-1 array is ready to start accepting data! Now we need to mount the array and copy everything from the old system to the new one.<br />
<br />
=== Mount the array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the data ===<br />
[root@arch ~]# rsync -avxHAXS --delete --progress / /mnt/new-raid<br />
<br />
Note that by using the '''-x''' option you are limiting rsync to a single file system. If you have a more traditional file system layout, with different partitions for /boot, /home, and perhaps others, you will need to rsync those file systems separately. For example:<br />
[root@arch ~]# rsync -avxHAXS --delete --progress /boot /mnt/new-raid/boot<br />
[root@arch ~]# rsync -avxHAXS --delete --progress /home /mnt/new-raid/home<br />
<br />
Alternatively, you can use tar instead of the above rsync command if you prefer. rsync will, however, be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0 md=0,/dev/sda2,/dev/sdb2<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
Also update the "kopt" and "groot" sections, as shown below, if they are in your {{Filename|/mnt/new-raid/boot/grub/menu.lst}} file, because it will make applying distribution kernel updates easier:<br />
- # kopt=root=UUID=fbafab1a-18f5-4bb9-9e66-a71c1b00977e ro<br />
+ # kopt=root=/dev/md0 ro md=0,/dev/sda2,/dev/sdb2<br />
<br />
## default grub root device<br />
## e.g. groot=(hd0,0)<br />
- # groot=(hd0,0)<br />
+ # groot=(hd0,1)<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices. It's better to use UUID codes here, which should not change, even if our partition detection order changes or a drive gets removed.<br />
<br />
To find the UUID to use:<br />
[root@arch ~]# blkid<br />
/dev/sda1: TYPE="swap" UUID="34656682b-34ad-8ed5-9233-dfab42272212" <br />
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP" <br />
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE" <br />
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2" <br />
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid" <br />
<br />
Look for the partition labeled "NEW-SWAP", on /dev/sdb1, that we created above. Copy your swap partition's UUID into the new fstab, as shown below. Of course we also add /dev/md0, as our root mount point.<br />
<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
# /dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root /]# <br />
You are now chrooted in what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, regardless of the Linux variant).<br />
<br />
==== For Arch Linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|mdadm}} in the HOOKS array. Place it after {{Codeline|autodetect}}, {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root /]# mkinitcpio -g /boot/kernel26.img<br />
[root /]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on boot-up. If you don't do this, you can get an "ALERT! /dev/md0 does not exist" error when booting:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
This rebuilds the initramfs for the currently running kernel. To rebuild the others, the following will show a listing of installed kernel versions (on Ubuntu):<br />
ls /boot/ | perl -lne "/^[A-z\.\-]+/m && print $'" | egrep -e 'openvz$|generic$|server$' | sort -u<br />
Then rerun update-initramfs for each of these versions so that all of them are ready for use with the new RAID setup.<br />
<br />
=== Install GRUB on the RAID array ===<br />
Start grub:<br />
<br />
[root@arch ~]# grub --no-floppy<br />
<br />
Then we find our two partitions: the current one, (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the partition we just added above, on the second partition of the second drive). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell grub to assume the new second drive is (hd0), i.e. the first disk in the system (which is not currently the case). If your first disk fails and you remove it, or you change the order in which disks are detected in the BIOS so that you can boot from your second disk, then your second disk will become the first disk in the system. The MBR will then be correct: your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p (hd0,1)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select to boot the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify that you have booted from the RAID array by looking at the output of mount. Also check /proc/mdstat again to confirm which disk is in the array.<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore redundancy and bring it up to full performance.<br />
<br />
== Add original disk to array ==<br />
<br />
=== Partition original disk ===<br />
<br />
Copy the partition table from /dev/sdb (newly implemented RAID disk) to /dev/sda (second disk we are adding to the array) so that both disks have exactly the same layout.<br />
[root@arch ~]# sfdisk -d /dev/sdb | sfdisk /dev/sda<br />
<br />
Alternate method: output the /dev/sdb partition layout to a file, then use that file as input for partitioning /dev/sda.<br />
[root@arch ~]# sfdisk -d /dev/sdb > raidinfo-partitions.sdb<br />
[root@arch ~]# sfdisk /dev/sda < raidinfo-partitions.sdb<br />
Use --force if needed:<br />
[root@arch ~]# sfdisk --force /dev/sda < raidinfo-partitions.sdb<br />
<br />
Verify that the partitioning is identical:<br />
[root@arch ~]# fdisk -l<br />
<br />
==== Note ====<br />
If you get an error when attempting to add the partition to the array:<br />
mdadm: /dev/sda1 not large enough to join array<br />
You might have seen an earlier warning message when partitioning this disk that the kernel still sees the old disk size. A reboot ought to fix this; then try adding the partition to the array again.<br />
<br />
=== Add disk partition to array ===<br />
[root@arch ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none><br />
<br />
=== Add second swap partition ===<br />
The partition was created with sfdisk, but it still has to be formatted for swap.<br />
[root@arch ~]# mkswap -L SWAP /dev/sda1<br />
Setting up swapspace version 1, size = 271314 kB<br />
LABEL=SWAP, UUID=1acd55dc-f73f-4639-94bc-3f30c33710c9<br />
<br />
Then add this UUID to the fstab exactly like the other one earlier. When done, it should look similar to this:<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=1acd55dc-f73f-4639-94bc-3f30c33710c9 none swap sw 0 0<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
<br />
It can be activated immediately:<br />
[root@arch ~]# swapon /dev/sda1</div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67451Convert a single drive system to RAID2009-04-23T22:47:22Z<p>Nickj: /* Make File Systems */ have just retested this on another box, and the inode parameter is not required, so removing this.</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system set up on a single drive, but you would like to add some redundancy by using RAID-1 to mirror your data across 2 drives. This guide takes the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there is no guarantee the process will go perfectly, and there is a real risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to set up 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
fdisk -l<br />
<br />
To partition the new disk:<br />
fdisk /dev/sdb<br />
<br />
Then run the following fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
2 # second partition<br />
press enter # Uses the default start from the end of the first partition<br />
press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state (because it is missing half its devices!).<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem you prefer here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up swap space on the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1 -L NEW-SWAP<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! Now we need to mount the array and copy everything from the old system to the new one.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command if you prefer. However, rsync will be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
Also update the "kopt" and "groot" sections, as shown below, if they are in your {{Filename|/mnt/new-raid/boot/grub/menu.lst}} file, because it will make applying distribution kernel updates easier:<br />
- # kopt=root=UUID=fbafab1a-18f5-4bb9-9e66-a71c1b00977e ro<br />
+ # kopt=root=/dev/md0 ro<br />
<br />
## default grub root device<br />
## e.g. groot=(hd0,0)<br />
- # groot=(hd0,0)<br />
+ # groot=(hd0,1)<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices. It's better to use UUID codes here, which should not change, even if our partition detection order changes or a drive gets removed.<br />
<br />
To find the UUID to use:<br />
[root@arch ~]# blkid<br />
/dev/sda1: TYPE="swap" UUID="34656682b-34ad-8ed5-9233-dfab42272212" <br />
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP" <br />
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE" <br />
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2" <br />
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid" <br />
<br />
Look for the partition labeled "NEW-SWAP", on /dev/sdb1, that we created above. Copy your swap partition's UUID into the new fstab, as shown below. Of course we also add /dev/md0, as our root mount point.<br />
<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
# /dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted in what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, regardless of the Linux variant).<br />
<br />
==== For Arch Linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on boot-up. If you don't do this, you can get an "ALERT! /dev/md0 does not exist" error when booting:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
=== Install GRUB on the RAID Array ===<br />
Start grub:<br />
<br />
[root@arch /]# grub --no-floppy<br />
<br />
Then we find our two partitions: the current one, (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the partition we just added above, on the second partition of the second drive). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell grub to assume the new second drive is (hd0), i.e. the first disk in the system (which is not currently the case). If your first disk fails and you remove it, or you change the order in which disks are detected in the BIOS so that you can boot from your second disk, then your second disk will become the first disk in the system. The MBR will then be correct: your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p (hd0,1)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select to boot the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify that you have booted from the RAID array by looking at the output of mount. You should have a line similar to the following in the output:<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore redundancy and bring it up to full performance.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
=== Add Disk to Array ===<br />
[root@svn ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67439Convert a single drive system to RAID2009-04-23T11:54:37Z<p>Nickj: /* Install GRUB on the RAID Array */ correct what example output will look like.</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system set up on a single drive, but you would like to add some redundancy by using RAID-1 to mirror your data across 2 drives. This guide takes the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there is no guarantee the process will go perfectly, and there is a real risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to set up 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
fdisk -l<br />
<br />
To partition the new disk:<br />
fdisk /dev/sdb<br />
<br />
Then run the following fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
2 # second partition<br />
press enter # Uses the default start from the end of the first partition<br />
press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state (because it is missing half its devices!).<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem you prefer here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up swap space on the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1 -L NEW-SWAP<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! Now we need to mount the array and copy everything from the old system to the new one.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command if you prefer. However, rsync will be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
Also update the "kopt" and "groot" sections, as shown below, if they are in your {{Filename|/mnt/new-raid/boot/grub/menu.lst}} file, because it will make applying distribution kernel updates easier:<br />
- # kopt=root=UUID=fbafab1a-18f5-4bb9-9e66-a71c1b00977e ro<br />
+ # kopt=root=/dev/md0 ro<br />
<br />
## default grub root device<br />
## e.g. groot=(hd0,0)<br />
- # groot=(hd0,0)<br />
+ # groot=(hd0,1)<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices. It's better to use UUID codes here, which should not change, even if our partition detection order changes or a drive gets removed.<br />
<br />
To find the UUID to use:<br />
[root@arch ~]# blkid<br />
/dev/sda1: TYPE="swap" UUID="34656682b-34ad-8ed5-9233-dfab42272212" <br />
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP" <br />
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE" <br />
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2" <br />
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid" <br />
<br />
Look for the partition labeled "NEW-SWAP", on /dev/sdb1, that we created above. Copy your swap partition's UUID into the new fstab, as shown below. Of course we also add /dev/md0, as our root mount point.<br />
<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
# /dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted in what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, regardless of the Linux variant).<br />
<br />
==== For Arch Linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on boot-up. If you don't do this, you can get an "ALERT! /dev/md0 does not exist" error when booting:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
=== Install GRUB on the RAID Array ===<br />
Start grub:<br />
<br />
[root@arch /]# grub --no-floppy<br />
<br />
Then we find our two partitions: the current one, (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the partition we just added above, on the second partition of the second drive). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell grub to assume the new second drive is (hd0), i.e. the first disk in the system (which is not currently the case). If your first disk fails and you remove it, or you change the order in which disks are detected in the BIOS so that you can boot from your second disk, then your second disk will become the first disk in the system. The MBR will then be correct: your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd0) (hd0)1+16 p (hd0,1)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify you have booted from the RAID array by looking at the output of {{Codeline|mount}}; you should see a line similar to the following:<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
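<br />
If you would rather not eyeball the mount output, the kernel's own view of the root mount can be checked directly; this is just a quick sanity check:<br />
 grep ' / ' /proc/mounts<br />
The matching line for the root filesystem should have /dev/md0 as its first field.<br />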
<br />
If the system boots fine, and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
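<br />
Rather than re-entering the layout by hand in fdisk, the partition table can usually be copied straight from the new disk with sfdisk, since the two disks here are the same size. Double-check the device names before running this, as it overwrites the partition table of the disk named second:<br />
 sfdisk -d /dev/sdb | sfdisk /dev/sda<br />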
<br />
=== Add Disk to Array ===<br />
 [root@arch ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
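<br />
The rebuild runs in the background and can take a while on large disks; one way to follow its progress live is:<br />
 watch cat /proc/mdstat<br />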
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67390Convert a single drive system to RAID2009-04-22T10:55:35Z<p>Nickj: /* For Ubuntu or Debian: Rebuild initramfs */</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system setup on a single drive, but you would like to add some redundancy to the setup by using RAID-1 to mirror your data across 2 drives. This guide follows the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there's no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to setup 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
fdisk /dev/sdb<br />
<br />
Then the fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2           # second partition<br />
 press enter # Uses the default start, immediately after the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
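<br />
A reboot simply forces the kernel to re-read the new partition table. You may be able to avoid the reboot by asking for a re-read directly (assuming the partprobe utility, shipped with parted, is installed):<br />
 partprobe /dev/sdb<br />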
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state (because it is missing half its devices!).<br />
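<br />
For a more detailed report on the array than /proc/mdstat gives, you can also ask mdadm itself; expect the state to be reported as degraded, with one slot shown as removed:<br />
 mdadm --detail /dev/md0<br />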
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up the swap area on the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1 -L NEW-SWAP<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
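<br />
If you want to confirm the new swap area is usable before relying on it at boot, it can be activated now by its label (purely a sanity check; after the later reboot it will be picked up via fstab anyway):<br />
 swapon -L NEW-SWAP<br />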
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array, and copy everything from the old system to the new system.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command; however, rsync will be quicker if you are only copying over changes. The tar command (run it from {{Filename|/mnt/new-raid}}) is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
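<br />
One of the stated goals is to verify the data move was successful. A cheap way to check is to repeat the same rsync with the dry-run flag added: if the copy is complete and nothing has changed in the meantime, it should list no files to transfer:<br />
 rsync -avHn --delete -x / /mnt/new-raid<br />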
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
Also update the "kopt" and "groot" sections, as shown below, if they are in your {{Filename|/mnt/new-raid/boot/grub/menu.lst}} file, because it will make applying distribution kernel updates easier:<br />
- # kopt=root=UUID=fbafab1a-18f5-4bb9-9e66-a71c1b00977e ro<br />
+ # kopt=root=/dev/md0 ro<br />
<br />
## default grub root device<br />
## e.g. groot=(hd0,0)<br />
- # groot=(hd0,0)<br />
+ # groot=(hd0,1)<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices. It's better to use UUID codes here, which should not change, even if our partition detection order changes or a drive gets removed.<br />
<br />
To find the UUID to use:<br />
[root@arch ~]# blkid<br />
/dev/sda1: TYPE="swap" UUID="34656682b-34ad-8ed5-9233-dfab42272212" <br />
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP" <br />
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE" <br />
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2" <br />
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid" <br />
<br />
Look for the partition labeled "NEW-SWAP" on /dev/sdb1, which we created above. Copy your swap partition's UUID into the new fstab, as shown below. We also add /dev/md0 as our root mount point.<br />
<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
 # /dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted into what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, irrespective of the Linux variant).<br />
<br />
==== For Arch linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on bootup. If you don't do this, you can get an "ALERT! /dev/md0 does not exist" error when booting:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
=== Install GRUB on the RAID Array ===<br />
Start grub:<br />
<br />
[root@arch /]# grub --no-floppy<br />
<br />
Then we find our two partitions: the current one at (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the second partition of the second drive, which we created above). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell GRUB to assume the new second drive is (hd0), i.e. the first disk in the system (which is not currently the case). However, if your first disk fails and you remove it, or if you change the order in which disks are detected in the BIOS so that you can boot from your second disk, then your second disk will become the first disk in the system. The MBR will then be correct: your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify you have booted from the RAID array by looking at the output of {{Codeline|mount}}; you should see a line similar to the following:<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine, and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
=== Add Disk to Array ===<br />
 [root@arch ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67389Convert a single drive system to RAID2009-04-22T10:48:40Z<p>Nickj: /* Partition the Disk */</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system setup on a single drive, but you would like to add some redundancy to the setup by using RAID-1 to mirror your data across 2 drives. This guide follows the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there's no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to setup 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
fdisk /dev/sdb<br />
<br />
Then the fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2           # second partition<br />
 press enter # Uses the default start, immediately after the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state (because it is missing half its devices!).<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up the swap area on the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1 -L NEW-SWAP<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array, and copy everything from the old system to the new system.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command; however, rsync will be quicker if you are only copying over changes. The tar command (run it from {{Filename|/mnt/new-raid}}) is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
Also update the "kopt" and "groot" sections, as shown below, if they are in your {{Filename|/mnt/new-raid/boot/grub/menu.lst}} file, because it will make applying distribution kernel updates easier:<br />
- # kopt=root=UUID=fbafab1a-18f5-4bb9-9e66-a71c1b00977e ro<br />
+ # kopt=root=/dev/md0 ro<br />
<br />
## default grub root device<br />
## e.g. groot=(hd0,0)<br />
- # groot=(hd0,0)<br />
+ # groot=(hd0,1)<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices. It's better to use UUID codes here, which should not change, even if our partition detection order changes or a drive gets removed.<br />
<br />
To find the UUID to use:<br />
[root@arch ~]# blkid<br />
/dev/sda1: TYPE="swap" UUID="34656682b-34ad-8ed5-9233-dfab42272212" <br />
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP" <br />
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE" <br />
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2" <br />
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid" <br />
<br />
Look for the partition labeled "NEW-SWAP" on /dev/sdb1, which we created above. Copy your swap partition's UUID into the new fstab, as shown below. We also add /dev/md0 as our root mount point.<br />
<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
 # /dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted into what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, irrespective of the Linux variant).<br />
<br />
==== For Arch linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on bootup:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
=== Install GRUB on the RAID Array ===<br />
Start grub:<br />
<br />
[root@arch /]# grub --no-floppy<br />
<br />
Then we find our two partitions: the current one at (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the second partition of the second drive, which we created above). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell GRUB to assume the new second drive is (hd0), i.e. the first disk in the system (which is not currently the case). However, if your first disk fails and you remove it, or if you change the order in which disks are detected in the BIOS so that you can boot from your second disk, then your second disk will become the first disk in the system. The MBR will then be correct: your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify you have booted from the RAID array by looking at the output of {{Codeline|mount}}; you should see a line similar to the following:<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine, and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
=== Add Disk to Array ===<br />
 [root@arch ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Talk:Convert_a_single_drive_system_to_RAID&diff=67388Talk:Convert a single drive system to RAID2009-04-22T10:46:43Z<p>Nickj: /* Updated docs */ new section</p>
<hr />
<div><br />
Hi.<br />
<br />
Is there a way to avoid the double copy?<br />
<br />
I mean, we create a new filesystem and copy all the data to it, and then the data will be mirrored back to the first drive. Is there a way to avoid one of the copies, for instance by building a RAID-1 group out of an existing disk? This would be quite a bit faster, wouldn't it?<br />
<br />
As far as I know there is a RAID-specific block at the end of the disk/partition; maybe we could add it without erasing the rest of the disk? The existing filesystem would probably have to be shrunk so that this block is not overwritten later, but it sounds possible to me...<br />
<br />
== Updated docs ==<br />
<br />
Hi, I recently upgraded an Ubuntu-based system from a single drive to RAID-1, and I followed the steps on this page because it was the clearest and best step-by-step documentation that I found on the whole net for doing that. I have added some updates because there were some bits that were unclear to me (e.g. how to use fdisk, had to check the docs for this), and a few problems that I ran into (e.g. the grub setup assumed that the failed disk was still present and that the BIOS settings would not have to be changed, which for me at least was not the case).<br />
<br />
These are the steps I applied to my system, and they seemed to work well. However, because I'm using Ubuntu, there were a few bits that were distro-specific, and where these apply I have tried to make these obvious. I'm hoping these parts will still be acceptable on this wiki, because although they are not Arch-Linux-specific, the vast majority of the steps (around 95%) are universally applicable to Linux systems, and so it seems best to have one page of steps that indicates where differences exist. If this is a problem then please [[Special:Emailuser/Nickj|send me a message to let me know]], and I'll maintain a copy of this page (still under GFDL of course) with the Ubuntu-specific bits on my separate personal wiki. -- All the best, [[User:Nickj|Nickj]] 06:46, 22 April 2009 (EDT)</div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67384Convert a single drive system to RAID2009-04-22T09:07:32Z<p>Nickj: /* Update GRUB */ Some further updates that make kernel security updates a lot easier (at least on Debian & Ubuntu); Arch may be similar. These two fields are used when menu.lst is regenerated.</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system setup on a single drive, but you would like to add some redundancy to the setup by using RAID-1 to mirror your data across 2 drives. This guide follows the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there's no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to setup 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
 fdisk /dev/sdb<br />
<br />
Then some fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2           # second partition<br />
 press enter # Uses the default start, immediately after the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state (because it is missing half its devices!).<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up the swap area on the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1 -L NEW-SWAP<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array, and copy everything from the old system to the new system.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command; however, rsync will be quicker if you are only copying over changes. The tar command (run it from {{Filename|/mnt/new-raid}}) is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
Also update the "kopt" and "groot" sections, as shown below, if they are in your {{Filename|/mnt/new-raid/boot/grub/menu.lst}} file, because it will make applying distribution kernel updates easier:<br />
- # kopt=root=UUID=fbafab1a-18f5-4bb9-9e66-a71c1b00977e ro<br />
+ # kopt=root=/dev/md0 ro<br />
<br />
## default grub root device<br />
## e.g. groot=(hd0,0)<br />
- # groot=(hd0,0)<br />
+ # groot=(hd0,1)<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices. It's better to use UUID codes here, which should not change, even if our partition detection order changes or a drive gets removed.<br />
<br />
To find the UUID to use:<br />
[root@arch ~]# blkid<br />
/dev/sda1: TYPE="swap" UUID="34656682b-34ad-8ed5-9233-dfab42272212" <br />
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP" <br />
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE" <br />
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2" <br />
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid" <br />
<br />
Look for the partition labeled "NEW-SWAP" on /dev/sdb1, which we created above. Copy your swap partition's UUID into the new fstab, as shown below. We also add /dev/md0 as our root mount point.<br />
<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
 # /dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted into what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, irrespective of the Linux variant).<br />
<br />
==== For Arch linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on bootup:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
=== Install GRUB on the RAID Array ===<br />
Start grub:<br />
<br />
[root@arch /]# grub --no-floppy<br />
<br />
Then we find our two partitions: the current one at (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the second partition of the second drive, which we created above). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell GRUB to assume the new second drive is (hd0), i.e. the first disk in the system (which is not currently the case). However, if your first disk fails and you remove it, or if you change the order in which disks are detected in the BIOS so that you can boot from your second disk, then your second disk will become the first disk in the system. The MBR will then be correct: your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify you have booted from the RAID array by looking at the output of {{Codeline|mount}}; you should see a line similar to the following:<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine, and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
=== Add Disk to Array ===<br />
 [root@arch ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67383Convert a single drive system to RAID2009-04-22T08:56:38Z<p>Nickj: /* Alter fstab */</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system setup on a single drive, but you would like to add some redundancy to the setup by using RAID-1 to mirror your data across 2 drives. This guide follows the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there's no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to setup 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
 fdisk /dev/sdb<br />
<br />
Then some fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2           # second partition<br />
 press enter # Uses the default start, immediately after the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state (because it is missing half its devices!).<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up the swap area on the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1 -L NEW-SWAP<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array, and copy everything from the old system to the new system.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command; however, rsync will be quicker if you are only copying over changes. The tar command (run it from {{Filename|/mnt/new-raid}}) is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices. It's better to use UUID codes here, which should not change, even if our partition detection order changes or a drive gets removed.<br />
<br />
To find the UUID to use:<br />
[root@arch ~]# blkid<br />
/dev/sda1: TYPE="swap" UUID="34656682b-34ad-8ed5-9233-dfab42272212" <br />
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP" <br />
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE" <br />
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2" <br />
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid" <br />
<br />
Look for the partition labeled "NEW-SWAP" on /dev/sdb1, which we created above. Copy your swap partition's UUID into the new fstab, as shown below. We also add /dev/md0 as our root mount point.<br />
<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
 # /dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted into what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, irrespective of the Linux variant).<br />
<br />
==== For Arch Linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
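For illustration only - the exact hook names depend on your hardware and mkinitcpio version, so treat this line as a sketch - the result might look like:<br />
 HOOKS="base udev pata scsi sata raid autodetect filesystems"<br />
Then regenerate the image:<br />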
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on bootup:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
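You can sanity-check the line that was appended (the exact format varies between mdadm versions, so this output is only illustrative):<br />
 grep ^ARRAY /etc/mdadm/mdadm.conf<br />
 ARRAY /dev/md0 level=raid1 num-devices=2 UUID=...<br />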
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
=== Install GRUB on the RAID Array ===<br />
Start grub:<br />
<br />
[root@arch /]# grub --no-floppy<br />
<br />
Then we find our two partitions - the current one (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the partition we just added above, on the second partition of the second drive). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell GRUB to assume the new second drive is (hd0), i.e. the first disk in the system (even though it currently is not). If your first disk fails and you remove it, or you change the BIOS detection order so that you boot from your second disk, then your second disk becomes the first disk in the system. The MBR will then be correct, your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify you have booted from the RAID array by looking at the output of mount. You should see a line similar to the following:<br />
 [root@arch ~]# mount<br />
 /dev/md0 on / type ext3 (rw)<br />
Also check {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine, and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
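<br />
Before re-adding the original disk, you can also confirm the degraded state with mdadm (output trimmed and illustrative):<br />
 [root@arch ~]# mdadm --detail /dev/md0<br />
        State : clean, degraded<br />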
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
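<br />
A shortcut, assuming sfdisk is available: dump the partition table of the new disk and write it to the original disk. Double-check the device names before running this, because it overwrites the partition table on {{Filename|/dev/sda}}:<br />
 [root@arch ~]# sfdisk -d /dev/sdb | sfdisk /dev/sda<br />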
<br />
=== Add Disk to Array ===<br />
[root@svn ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
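To follow the rebuild live, you can use watch (from the standard procps package), which re-runs the command every two seconds:<br />
 watch cat /proc/mdstat<br />
A one-off check:<br />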
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67382Convert a single drive system to RAID2009-04-22T08:55:00Z<p>Nickj: /* Alter fstab */ Add explanation about UUIDs, and how to find them.</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system setup on a single drive, but you would like to add some redundancy to the setup by using RAID-1 to mirror your data across 2 drives. This guide follows the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there's no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to setup 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
 fdisk /dev/sdb<br />
<br />
Then run the following fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2           # second partition<br />
 press enter # Uses the default start from the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
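<br />
A hedged alternative to rebooting, assuming the parted package is installed, is to ask the kernel to re-read the partition table and then retry the mdadm command:<br />
 partprobe /dev/sdb<br />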
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state (because it is missing half its devices!).<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up swap space on the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1 -L NEW-SWAP<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array and copy everything from the old system to the new one.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command; however, rsync will be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root             (hd1,1)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices. It's better to use UUID codes here, which should not change, even if our partition detection order changes or a drive gets removed.<br />
<br />
To find the UUID to use:<br />
[root@arch ~]# blkid<br />
/dev/sda1: TYPE="swap" UUID="34656682b-34ad-8ed5-9233-dfab42272212" <br />
/dev/sdb1: UUID="9ff5682b-d5a1-4ed5-8d63-d1df911e0142" TYPE="swap" LABEL="NEW-SWAP" <br />
/dev/md0: UUID="6f2ea3d3-d7be-4c9d-adfa-dbeeedaf128e" SEC_TYPE="ext2" TYPE="ext3" LABEL="RAID-ONE" <br />
/dev/sda2: UUID="13dd2227-6592-403b-931a-7f3e14a23e1f" TYPE="ext2" <br />
/dev/sdb2: UUID="b28813e7-15fc-d4aa-dc8a-e2c1de641df1" TYPE="mdraid" <br />
<br />
Look for the partition labeled "NEW-SWAP" on /dev/sdb1, which we created above. Copy your swap partition's UUID into the new fstab, like so:<br />
<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
UUID=9ff5682b-d5a1-4ed5-8d63-d1df911e0142 none swap sw 0 0<br />
 # /dev/sdb1    swap    swap    defaults   0    0<br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted into what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, irrespective of the Linux variant).<br />
<br />
==== For Arch Linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on bootup:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
=== Install GRUB on the RAID Array ===<br />
Start grub:<br />
<br />
[root@arch /]# grub --no-floppy<br />
<br />
Then we find our two partitions - the current one (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the partition we just added above, on the second partition of the second drive). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell GRUB to assume the new second drive is (hd0), i.e. the first disk in the system (even though it currently is not). If your first disk fails and you remove it, or you change the BIOS detection order so that you boot from your second disk, then your second disk becomes the first disk in the system. The MBR will then be correct, your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify you have booted from the RAID array by looking at the output of mount. You should see a line similar to the following:<br />
 [root@arch ~]# mount<br />
 /dev/md0 on / type ext3 (rw)<br />
Also check {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine, and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
=== Add Disk to Array ===<br />
[root@svn ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67381Convert a single drive system to RAID2009-04-22T08:48:31Z<p>Nickj: /* Make File Systems */ add a label for the new swap partition</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system setup on a single drive, but you would like to add some redundancy to the setup by using RAID-1 to mirror your data across 2 drives. This guide follows the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there's no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to setup 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
 fdisk /dev/sdb<br />
<br />
Then run the following fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2           # second partition<br />
 press enter # Uses the default start from the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state (because it is missing half its devices!).<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up swap space on the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1 -L NEW-SWAP<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array and copy everything from the old system to the new one.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command; however, rsync will be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root             (hd1,1)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices:<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
/dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted into what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, irrespective of the Linux variant).<br />
<br />
==== For Arch Linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on bootup:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
=== Install GRUB on the RAID Array ===<br />
Start grub:<br />
<br />
[root@arch /]# grub --no-floppy<br />
<br />
Then we find our two partitions - the current one (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the partition we just added above, on the second partition of the second drive). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell GRUB to assume the new second drive is (hd0), i.e. the first disk in the system (even though it currently is not). If your first disk fails and you remove it, or you change the BIOS detection order so that you boot from your second disk, then your second disk becomes the first disk in the system. The MBR will then be correct, your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify you have booted from the RAID array by looking at the output of mount. You should see a line similar to the following:<br />
 [root@arch ~]# mount<br />
 /dev/md0 on / type ext3 (rw)<br />
Also check {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine, and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
=== Add Disk to Array ===<br />
[root@svn ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67380Convert a single drive system to RAID2009-04-22T08:38:24Z<p>Nickj: /* Install GRUB on the RAID Array */</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system setup on a single drive, but you would like to add some redundancy to the setup by using RAID-1 to mirror your data across 2 drives. This guide follows the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there's no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to setup 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
 fdisk /dev/sdb<br />
<br />
Then run the following fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2           # second partition<br />
 press enter # Uses the default start from the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state (because it is missing half its devices!).<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up swap space on the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array and copy everything from the old system to the new one.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command; however, rsync will be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root             (hd1,1)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices:<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
/dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted into what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, irrespective of the Linux variant).<br />
<br />
==== For Arch Linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on bootup:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
=== Install GRUB on the RAID Array ===<br />
Start grub:<br />
<br />
[root@arch /]# grub --no-floppy<br />
<br />
Then we find our two partitions - the current one (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the partition we just added above, on the second partition of the second drive). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell GRUB to assume the new second drive is (hd0), i.e. the first disk in the system (even though it currently is not). If your first disk fails and you remove it, or you change the BIOS detection order so that you boot from your second disk, then your second disk becomes the first disk in the system. The MBR will then be correct, your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify you have booted from the RAID array by looking at the output of mount. You should see a line similar to the following:<br />
 [root@arch ~]# mount<br />
 /dev/md0 on / type ext3 (rw)<br />
Also check {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine, and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
=== Add Disk to Array ===<br />
[root@svn ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67379Convert a single drive system to RAID2009-04-22T08:38:05Z<p>Nickj: /* For Ubuntu or Debian: Rebuild initramfs */ don't need sudo here, it's assumed this whole thing is done as root.</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system setup on a single drive, but you would like to add some redundancy to the setup by using RAID-1 to mirror your data across 2 drives. This guide follows the following steps to make the required changes, without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there's no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to setup 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
 fdisk /dev/sdb<br />
<br />
Then run the following fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2           # second partition<br />
 press enter # Uses the default start from the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state (because it is missing half its devices!).<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Set up swap space on the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array and copy everything from the old system to the new one.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command; however, rsync will be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root             (hd1,1)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices:<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
/dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted into what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, irrespective of the Linux variant).<br />
<br />
==== For Arch Linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on bootup:<br />
mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
=== Install GRUB on the RAID Array ===<br />
Start grub:<br />
<br />
[root@arch /]# grub --no-floppy<br />
<br />
Then we find our two partitions - the current one (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the partition we just added above, on the second partition of the second drive). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell GRUB to assume the new second drive is (hd0), i.e. the first disk in the system (even though it currently is not). If your first disk fails and you remove it, or you change the BIOS detection order so that you boot from your second disk, then your second disk becomes the first disk in the system. The MBR will then be correct, your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify you have booted from the RAID array by looking at the output of mount. You should see a line similar to the following:<br />
 [root@arch ~]# mount<br />
 /dev/md0 on / type ext3 (rw)<br />
Also check {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine, and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
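Rather than re-entering the numbers by hand, you can copy the partition table across (a sketch, assuming {{Codeline|sfdisk}} is installed; double-check the device order, as this '''overwrites''' the partition table on {{Filename|/dev/sda}}):<br />
 sfdisk -d /dev/sdb | sfdisk /dev/sda<br />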
<br />
=== Add Disk to Array ===<br />
[root@svn ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
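The rebuild runs in the background and can take a while; to follow its progress you can re-read {{Filename|/proc/mdstat}} every few seconds, for example:<br />
 watch -n 5 cat /proc/mdstat<br />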
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67378Convert a single drive system to RAID2009-04-22T08:37:02Z<p>Nickj: /* Rebuild initcpio */ Expand. Apologies for the non-arch specific stuff, but I have tried to make it as brief as possible. If arch has a /etc/mdadm/mdadm.conf then most of this step applies to it too</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system set up on a single drive, but you would like to add some redundancy by using RAID-1 to mirror your data across 2 drives. This guide follows these steps to make the required changes without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there are no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to set up 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
 fdisk /dev/sdb<br />
<br />
Then the fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2 # second partition<br />
 press enter # Uses the default start from the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
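Before resorting to a reboot, it may be enough to make the kernel re-read the partition table (a sketch, assuming {{Codeline|partprobe}} from the parted package is installed):<br />
 partprobe /dev/sdb<br />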
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state. (Because it's missing half its devices!)<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Initialize the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array and copy everything from the old system to the new system.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command if you prefer. However, rsync will be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
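To verify that the data move was successful (one of the goals listed at the top), a checksum-based dry run of the same rsync command should report nothing left to transfer; a sketch:<br />
 rsync -avHcn --delete -x / /mnt/new-raid<br />
The {{Codeline|-c}} flag compares checksums and {{Codeline|-n}} makes it a dry run, so nothing is modified.<br />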
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices:<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
/dev/sdb1 swap swap defaults 0 0<br />
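Optionally, since the filesystem was labelled RAID-ONE when it was created, you could mount it by label instead of by device name (a sketch; only do this if your tools support mounting by label):<br />
 LABEL=RAID-ONE / ext3 defaults 0 1<br />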
<br />
=== Rebuild initcpio or initramfs ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted in what will become the root of your RAID-1 system. Complete the appropriate section below for your distribution (almost every other step is identical, irrespective of the Linux variant).<br />
<br />
==== For Arch linux: Rebuild initcpio ====<br />
<br />
Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
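For illustration only - the exact hook list depends on your hardware and mkinitcpio version - the edited line might look something like:<br />
 HOOKS="base udev pata scsi sata raid autodetect filesystems"<br />
Then regenerate the image:<br />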
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
==== For Ubuntu or Debian: Rebuild initramfs ====<br />
<br />
sudo nano /etc/mdadm/mdadm.conf<br />
... and change the "MAILADDR" line to be your email address, if you want emailed alerts of problems with the RAID-1.<br />
<br />
Then save the array configuration with UUIDs to make it easier for the system to find /dev/md0 on bootup:<br />
sudo mdadm --detail --scan >> /etc/mdadm/mdadm.conf <br />
<br />
Then rebuild initramfs, incorporating the above two changes:<br />
update-initramfs -k `uname -r` -c -t<br />
<br />
=== Install GRUB on the RAID Array ===<br />
Start grub:<br />
<br />
[root@arch /]# grub --no-floppy<br />
<br />
Then we find our two partitions - the current one (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the second partition of the second drive, which we just added above). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell GRUB to treat the new second drive as (hd0), i.e. the first disk in the system, even though that is not currently the case. If your first disk fails and you remove it, or you change the order in which disks are detected in the BIOS so that you can boot from your second disk, your second disk will become the first disk in the system. The MBR will then be correct: your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify that you have booted from the RAID array by looking at the output of {{Codeline|mount}}. You should see a line similar to the following:<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note that it is the swap partition on {{Filename|sdb}} that is in use, and nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
=== Add Disk to Array ===<br />
[root@svn ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67377Convert a single drive system to RAID2009-04-22T08:22:43Z<p>Nickj: /* Install GRUB on the RAID Array */ this section assumes the first disk will be present, but what if the first disk fails & has been removed? Better to assume the first disk is dead & gone. Updating.</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system set up on a single drive, but you would like to add some redundancy by using RAID-1 to mirror your data across 2 drives. This guide follows these steps to make the required changes without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there are no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to set up 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
 fdisk /dev/sdb<br />
<br />
Then the fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2 # second partition<br />
 press enter # Uses the default start from the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state. (Because it's missing half its devices!)<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Initialize the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
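Later, once the matching swap partition exists on the original disk too, you can give the two equal priority so the kernel stripes swap across both disks (the performance point made in the linked article). A sketch of the eventual fstab entries:<br />
 /dev/sda1 swap swap defaults,pri=1 0 0<br />
 /dev/sdb1 swap swap defaults,pri=1 0 0<br />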
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array and copy everything from the old system to the new system.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command if you prefer. However, rsync will be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices:<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
/dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted in what will become the root of your RAID-1 system. Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
=== Install GRUB on the RAID Array ===<br />
Start grub:<br />
<br />
[root@arch /]# grub --no-floppy<br />
<br />
Then we find our two partitions - the current one (hd0,0) (i.e. first disk, first partition), and (hd1,1) (i.e. the second partition of the second drive, which we just added above). Check that you get two results here:<br />
<br />
grub> find /boot/grub/stage1<br />
(hd0,0)<br />
(hd1,1)<br />
<br />
Then we tell GRUB to treat the new second drive as (hd0), i.e. the first disk in the system, even though that is not currently the case. If your first disk fails and you remove it, or you change the order in which disks are detected in the BIOS so that you can boot from your second disk, your second disk will become the first disk in the system. The MBR will then be correct: your new second drive will have become your first drive, and you will be able to boot from this disk.<br />
<br />
grub> device (hd0) /dev/sdb<br />
<br />
Then we install GRUB onto the MBR of our new second drive. Check that the "partition type" is detected as "0xfd", as shown below, to make sure you have the right partition:<br />
<br />
grub> root (hd0,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
grub> setup (hd0)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify that you have booted from the RAID array by looking at the output of {{Codeline|mount}}. You should see a line similar to the following:<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note that it is the swap partition on {{Filename|sdb}} that is in use, and nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
=== Add Disk to Array ===<br />
[root@svn ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
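If you want the command line to block until the resync has finished (for example, before rebooting), mdadm can wait on it; a sketch:<br />
 mdadm --wait /dev/md0<br />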
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67376Convert a single drive system to RAID2009-04-22T07:57:11Z<p>Nickj: /* Copy the Data */ rsync command will probably be better, especially if people have to repeat this a few times because they are learning and make some mistakes (like I did!)</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system set up on a single drive, but you would like to add some redundancy by using RAID-1 to mirror your data across 2 drives. This guide follows these steps to make the required changes without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there are no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to set up 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
 fdisk /dev/sdb<br />
<br />
Then the fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2 # second partition<br />
 press enter # Uses the default start from the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state. (Because it's missing half its devices!)<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
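If you would rather not have these periodic checks (a judgment call; they can make an occasional boot very slow on a large array), the counters can be disabled as the message suggests:<br />
 tune2fs -c 0 -i 0 /dev/md0<br />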
<br />
Initialize the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array and copy everything from the old system to the new system.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# rsync -avH --delete --progress -x / /mnt/new-raid<br />
<br />
Alternatively, you can use tar instead of the above rsync command if you prefer. However, rsync will be quicker if you are only copying over changes. The tar command is: <code>tar -C / -clspf - . | tar -xlspvf -</code><br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices:<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
/dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted in what will become the root of your RAID-1 system. Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
=== Install GRUB on the RAID Array ===<br />
[root@arch /]# grub<br />
grub> root (hd1,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
<br />
grub> setup (hd1)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
 grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify that you have booted from the RAID array by looking at the output of {{Codeline|mount}}. You should see a line similar to the following:<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note that it is the swap partition on {{Filename|sdb}} that is in use, and nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
=== Add Disk to Array ===<br />
[root@svn ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67375Convert a single drive system to RAID2009-04-22T07:51:57Z<p>Nickj: /* Rebuild initcpio */ the second "chroot /mnt/new-raid/" is not required (and will generate an error)</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system set up on a single drive, but you would like to add some redundancy by using RAID-1 to mirror your data across 2 drives. This guide follows these steps to make the required changes without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there are no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to set up 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
 fdisk /dev/sdb<br />
<br />
Then the fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2 # second partition<br />
 press enter # Uses the default start from the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state. (Because it's missing half its devices!)<br />
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Initialize the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array and copy everything from the old system to the new system.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
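A quick way to confirm the array mounted and is the size you expect (a sketch):<br />
 df -h /mnt/new-raid<br />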
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# tar -C / -clspf - . | tar -xlspvf -<br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices:<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
/dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted in what will become the root of your RAID-1 system. Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
=== Install GRUB on the RAID Array ===<br />
[root@arch /]# grub<br />
grub> root (hd1,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
<br />
grub> setup (hd1)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
 grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once GRUB on the '''new''' disk loads, make sure you select the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify that you have booted from the RAID array by looking at the output of {{Codeline|mount}}. You should see a line similar to the following:<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
Also {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note that it is the swap partition on {{Filename|sdb}} that is in use, and nothing from {{Filename|sda}}.<br />
<br />
If the system boots fine and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
=== Add Disk to Array ===<br />
[root@svn ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
unused devices: <none></div>Nickjhttps://wiki.archlinux.org/index.php?title=Convert_a_single_drive_system_to_RAID&diff=67374Convert a single drive system to RAID2009-04-22T07:51:11Z<p>Nickj: /* Create the RAID Device */ I had to reboot to get /dev/sdb2 to appear after partitioning the disk, add note to this effect.</p>
<hr />
<div>[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
You already have a fully functional system set up on a single drive, but you would like to add some redundancy by using RAID-1 to mirror your data across 2 drives. This guide follows these steps to make the required changes without losing data.<br />
* Create a single-disk RAID-1 array with our new disk<br />
* Move all your data from the old-disk to the new RAID-1 array<br />
* Verify the data move was successful<br />
* Wipe the old disk and add it to the new RAID-1 array<br />
<br />
{{Warning | Make a backup first. Even though our aim is to convert to a RAID setup without losing data, there are no guarantees the process will be perfect, and there is a high risk of accidents happening.}}<br />
<br />
<br />
== Assumptions ==<br />
* I will assume for the sake of the guide that the disk currently in your system is {{Filename|/dev/sda}} and your new disk is {{Filename|/dev/sdb}}.<br />
* We will create the following configuration:<br />
** 1 x RAID-1 array for the file-system (using 2 x partitions, 1 on each disk)<br />
** 2 x Swap Partitions using 1 partition on each disk.<br />
The swap partitions will not be in a RAID array as having swap on RAID serves no purpose. Refer to [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-2.html#ss2.3 this article] for reasons why.<br />
<br />
* To minimize the risk of Data on Disk (DoD) changing in the middle of our changes, I suggest you drop to single user mode before you start by using the {{Codeline|init 1}} command.<br />
<br />
* You will need to be the root user for the entire process.<br />
<br />
== Create New RAID Array ==<br />
First we need to create a single-disk RAID array using the new disk.<br />
=== Partition the Disk ===<br />
Use fdisk or your partitioning program of choice to set up 2 primary partitions on your new disk. Make the swap partition half the size of the total swap you want (the other half will go on the other disk).<br />
<br />
Drop to single user mode:<br />
init 1<br />
<br />
To see the current partitions:<br />
 fdisk -l<br />
<br />
To partition the new disk:<br />
 fdisk /dev/sdb<br />
<br />
Then the fdisk commands to partition the new disk. Note that everything after the "#" is an explanation of what the command is doing:<br />
n # new<br />
p # primary<br />
1 # first partition<br />
1 # start at first cylinder<br />
101 # end cylinder, 0.1% of the disk. Note: update this number as appropriate for your disk.<br />
n # new<br />
p # primary<br />
 2 # second partition<br />
 press enter # Uses the default start from the end of the first partition<br />
 press enter # Uses the default of using all the remaining space on the disk.<br />
t # set the partition type<br />
1 # for partition number 1<br />
82 # ... and set it to be swap<br />
t # set the partition type<br />
2 # for partition number 2 ...<br />
fd # ... and set it to be "linux raid auto"<br />
p # print what the partition table will look like<br />
w # now write all of the above changes to disk<br />
<br />
At the end of partitioning, your partitions should look something like this:<br />
[root@arch ~]# fdisk -l /dev/sdb<br />
Disk /dev/sdb: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sdb1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sdb2 67 9729 77618047+ fd Linux raid autodetect<br />
<br />
Make sure your partition types are set correctly. {{Codeline|"Linux Swap"}} is type {{Codeline|82}} and {{Codeline|"Linux raid autodetect"}} is type {{Codeline|FD}}.<br />
<br />
=== Create the RAID Device ===<br />
Next, create the single-disk RAID-1 array. Note the {{Codeline|"missing"}} keyword is specified as one of our devices.<br />
[root@arch ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2<br />
mdadm: array /dev/md0 started.<br />
<br />
Note: if the above command causes mdadm to say "no such device /dev/sdb2", then reboot, and run the command again.<br />
<br />
Make sure the array has been created correctly by checking {{Filename|/proc/mdstat}}:<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sdb2[1]<br />
40064 blocks [2/1] [_U]<br />
<br />
unused devices: <none><br />
<br />
The array is intact, but in a degraded state. (Because it's missing half its devices!)<br />
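You can also inspect the RAID superblock that was written to the member partition itself (a sketch):<br />
 mdadm --examine /dev/sdb2<br />
This should show the array UUID and that the partition is an active member of a 2-device RAID-1 with one device missing.<br />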
<br />
=== Make File Systems ===<br />
Use whatever filesystem is your preference here. I'll use ext3 for this guide.<br />
[root@arch ~]# mkfs -t ext3 -I 128 -j -L RAID-ONE /dev/md0<br />
mke2fs 1.38 (30-Jun-2005)<br />
Filesystem label=<br />
OS type: Linux<br />
Block size=4096 (log=2)<br />
Fragment size=4096 (log=2)<br />
10027008 inodes, 20027008 blocks<br />
1001350 blocks (5.00%) reserved for the super user<br />
First data block=0<br />
612 block groups<br />
32768 blocks per group, 32768 fragments per group<br />
16384 inodes per group<br />
Superblock backups stored on blocks:<br />
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,<br />
4096000, 7962624, 11239424<br />
<br />
Writing inode tables: done<br />
Creating journal (32768 blocks): done<br />
Writing superblocks and filesystem accounting information: done<br />
<br />
This filesystem will be automatically checked every 25 mounts or<br />
180 days, whichever comes first. Use tune2fs -c or -i to override.<br />
<br />
Initialize the swap partition:<br />
[root@arch ~]# mkswap /dev/sdb1<br />
Setting up swapspace version 1, size = 271314 kB<br />
no label, UUID=9d746813-2d6b-4706-a56a-ecfd108f3fe9<br />
<br />
== Copy Data ==<br />
The new RAID-1 array is ready to start accepting data! So now we need to mount the array and copy everything from the old system to the new system.<br />
<br />
=== Mount the Array ===<br />
[root@arch ~]# mkdir /mnt/new-raid<br />
[root@arch ~]# mount /dev/md0 /mnt/new-raid<br />
<br />
=== Copy the Data ===<br />
[root@arch ~]# cd /mnt/new-raid<br />
[root@arch mnt]# tar -C / -clspf - . | tar -xlspvf -<br />
<br />
=== Update GRUB ===<br />
Use your preferred text editor to open {{Filename|/mnt/new-raid/boot/grub/menu.lst}}.<br />
<br />
--- SNIP ---<br />
default 0<br />
color light-blue/black light-cyan/blue<br />
<br />
## fallback<br />
fallback 1<br />
<br />
# (0) Arch Linux<br />
title Arch Linux - Original Disc<br />
root (hd0,0)<br />
kernel /vmlinuz26 root=/dev/sda1<br />
<br />
# (1) Arch Linux<br />
title Arch Linux - New RAID<br />
root (hd1,0)<br />
#kernel /vmlinuz26 root=/dev/sda1 ro<br />
kernel /vmlinuz26 root=/dev/md0<br />
--- SNIP ---<br />
Notice we added the {{Codeline|fallback}} line and duplicated the Arch Linux entry with a different {{Codeline|root}} directive on the kernel line.<br />
<br />
=== Alter fstab ===<br />
You need to tell fstab on the '''new''' disk where to find the new devices:<br />
[root@arch ~]# cat /mnt/new-raid/etc/fstab<br />
/dev/md0 / ext3 defaults 0 1<br />
/dev/sdb1 swap swap defaults 0 0<br />
<br />
=== Rebuild initcpio ===<br />
[root@arch ~]# mount --bind /sys /mnt/new-raid/sys<br />
[root@arch ~]# mount --bind /proc /mnt/new-raid/proc<br />
[root@arch ~]# mount --bind /dev /mnt/new-raid/dev<br />
[root@arch ~]# chroot /mnt/new-raid/<br />
[root ~]# <br />
You are now chrooted in what will become the root of your RAID-1 system. Edit {{Filename|/etc/mkinitcpio.conf}} to include {{Codeline|raid}} in the HOOKS array. Place it before {{Codeline|autodetect}} but after {{Codeline|sata}}, {{Codeline|scsi}} and {{Codeline|pata}} (whichever is appropriate for your hardware).<br />
[root ~]# mkinitcpio -g /boot/kernel26.img<br />
[root ~]# exit<br />
<br />
=== Install GRUB on the RAID Array ===<br />
[root@arch /]# grub<br />
grub> root (hd1,1)<br />
Filesystem type is ext2fs, partition type 0xfd<br />
<br />
grub> setup (hd1)<br />
Checking if "/boot/grub/stage1" exists... yes<br />
Checking if "/boot/grub/stage2" exists... yes<br />
Checking if "/boot/grub/e2fs_stage1_5" exists... yes<br />
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded. succeeded<br />
Running "install /boot/grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded<br />
Done<br />
 grub> quit<br />
<br />
== Verify Success ==<br />
Reboot your computer, making sure it boots from the new RAID disk ({{Filename|/dev/sdb}}) and not the original disk ({{Filename|/dev/sda}}). You may need to change the boot device priorities in your BIOS to do this.<br />
<br />
Once the GRUB on the '''new''' disk loads, make sure you boot the new entry you created in {{Filename|menu.lst}} earlier.<br />
<br />
Verify you have booted from the RAID array by looking at the output of {{Codeline|mount}}. You should see a line similar to the following:<br />
[root@arch ~]# mount<br />
/dev/md0 on / type ext3 (rw)<br />
Also check the output of {{Codeline|swapon -s}}:<br />
[root@arch ~]# swapon -s<br />
Filename Type Size Used Priority<br />
/dev/sdb1 partition 4000144 16 -1<br />
Note it is the swap partition on {{Filename|sdb}} that is in use, nothing from {{Filename|sda}}.<br />
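<br />
You can also confirm the array itself is assembled (still degraded at this point, so expect {{Codeline|[2/1] [_U]}}):<br />
 [root@arch ~]# cat /proc/mdstat<br />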
<br />
If the system boots fine and the output of the above commands is correct, then congratulations! You're now running off the degraded RAID array. We can now add the original disk to the array to restore full redundancy.<br />
<br />
== Add Original Disk to Array ==<br />
<br />
=== Partition Original Disk ===<br />
Take the output of {{Codeline|fdisk -l}} on your new disk, and make the partitions on your original disk look the same. Note this destroys whatever is left on {{Filename|/dev/sda}}, which is why we verified the copy first. A scripted way to clone the layout is shown after the listing below.<br />
[root@arch ~]# fdisk -l /dev/sda<br />
Disk /dev/sda: 80.0 GB, 80025280000 bytes<br />
255 heads, 63 sectors/track, 9729 cylinders<br />
Units = cylinders of 16065 * 512 = 8225280 bytes<br />
Disk identifier: 0x00000000<br />
<br />
Device Boot Start End Blocks Id System<br />
/dev/sda1 1 66 530113+ 82 Linux swap / Solaris<br />
/dev/sda2 67 9729 77618047+ fd Linux raid autodetect<br />
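<br />
Rather than repeating the fdisk steps by hand, you can clone the partition table from the new disk in one step (a sketch using {{Codeline|sfdisk}}; double-check the source and target devices before running it, as this overwrites the partition table on {{Filename|/dev/sda}}):<br />
 [root@arch ~]# sfdisk -d /dev/sdb | sfdisk /dev/sda<br />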
<br />
=== Add Disk to Array ===<br />
 [root@arch ~]# mdadm /dev/md0 -a /dev/sda2<br />
mdadm: hot added /dev/sda2<br />
<br />
Verify that the RAID array is being rebuilt.<br />
[root@arch ~]# cat /proc/mdstat<br />
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]<br />
md0 : active raid1 sda2[2] sdb2[1]<br />
80108032 blocks [2/1] [_U]<br />
[>....................] recovery = 1.2% (1002176/80108032) finish=42.0min speed=31318K/sec<br />
<br />
 unused devices: <none><br />
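<br />
The rebuild runs in the background; you can watch its progress until the array shows {{Codeline|[2/2] [UU]}} (a convenience sketch using {{Codeline|watch}}):<br />
 [root@arch ~]# watch -n 5 cat /proc/mdstat<br />
<br />
Once the rebuild has finished, you may also want to install GRUB on the original disk so the system can boot from either drive. A hedged sketch, assuming the partition layout above (where {{Filename|/dev/sda2}} is the mirrored filesystem):<br />
 [root@arch /]# grub<br />
 grub> root (hd0,1)<br />
 grub> setup (hd0)<br />
 grub> quit<br />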