Talk:Software RAID and LVM

Revision as of 14:43, 11 October 2011

This article is an updated version of the Installing with Software RAID or LVM article. That page has been redirected to this page. The most recent revision of the old article can be found here.

Performance of Swap Array

It was pointed out in this forum thread: http://bbs.archlinux.org/viewtopic.php?p=121424#121495 that putting swap on software RAID is not useful and can even hurt performance.

I added a title to the above comment by Jstech on 1 November 2005. Although the above link is broken, the claim is supported by the Gentoo Wiki article. This is a reminder to make a note of it in the article. ~ Filam 22:09, 29 August 2011 (EDT)
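For reference, the usual alternative to striping swap with RAID0 is to give each swap partition the same priority in /etc/fstab; the kernel then interleaves across them on its own. A sketch, assuming swap lives on sda2 and sdb2 (device names are examples):

  /dev/sda2 none swap defaults,pri=1 0 0
  /dev/sdb2 none swap defaults,pri=1 0 0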

Bug in /etc/rc.sysinit?

I think there is a problem with /etc/rc.sysinit, as it does not load the module for the device mapper. This is LVM-specific. I just modified the file like this:

  if [ "$USELVM" = "yes" -o "$USELVM" = "YES" ]; then
    if [ -f /etc/lvmtab -a -x /sbin/vgchange ]; then
      # Kernel 2.4.x, LVM1 groups
      stat_busy "Activating LVM1 groups"
      /sbin/vgchange -a y
    elif [ -x /sbin/lvm -a -d /sys/block ]; then
      # Kernel 2.6.x, LVM2 groups
      stat_busy "Activating LVM2 groups"
      /sbin/modprobe dm_mod   # <<----- Here is my change
      /sbin/lvm vgscan --ignorelockingfailure
      /sbin/lvm vgchange --ignorelockingfailure -a y
    fi
  fi


I use the standard Arch kernel.

What about root on LVM?

I can't figure out how to make it work. What should the mkinitrd variable LVM_ROOT= be set to? I am using GRUB and my partitioning is like this:

  /dev/sda1 ext2 /boot
  /dev/sda2 LVM
        vg_name: linux
        lv_name:    system   /     ext3
                      home   /home ext3
                      swap   none  sw

I've tried LVM_ROOT=/dev/linux/system, and GRUB kernel lines with root=/dev/mapper/linux-system or root=/dev/linux/system. I have USELVM=YES and LVM is enabled in the initrd.

The LVM partitions were made with lvm2, and I can activate and mount them manually, so where am I going wrong? I am still dropped into busybox with an error like "can't mount root" or "can't switch root" (I don't remember exactly); the LVM partitions are inactive until I enable and mount them by hand.

--Suw 06:24, 8 April 2006 (EDT)

The topic of how to stack LVM and RAID is covered in What is better LVM on RAID or RAID on LVM? on Server Fault. I added a link to the resource section earlier today. ~ Filam 22:12, 29 August 2011 (EDT)
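For anyone finding this thread later: with mkinitcpio (which replaced mkinitrd), booting from an LVM root generally needs only the lvm2 hook and the device-mapper path on the kernel line. A sketch using the volume names from the question above (hook list is an example for that era's kernels; adjust to your hardware):

  # /etc/mkinitcpio.conf
  HOOKS="base udev autodetect pata scsi sata lvm2 filesystems"

  # menu.lst kernel line
  kernel /vmlinuz26 root=/dev/mapper/linux-system ro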

Lousy guide

This has to be the worst, most poorly designed guide I have seen in all of my time with open source. Restructure this thing and strip the old stuff for 7.1; it is VERY outdated. I must say that I would feel more confident jumping head-on into RAID on my own than following this guide. Get some structure into it. --Kbutcher5 13:23, 4 May 2007

I did an install today and used this guide as a guideline, alongside my common sense and other documentation on the mighty internet. Looking at this guide from an abstract point of view, one will find that the principles are still the same. Going into basic details, you'll also get the gist. But several specifics are inaccurate and the overall structure could be improved. The guidance goes flaky once you get to the "Install and Configure Arch" part and beyond, because it is so outdated.
That's where I had to use outside sources and apply my own know-how to get the system running.
Perhaps I'll edit some bits here and there to patch it up a little. I have a general idea of what I did to get it right. And I'll enter a new GRUB example, because the kernel with the mdadm hook can detect arrays, or get it right by reading /etc/mdadm.conf. --Ultraman 2:47, 3 May 2009 (CEST)
I added a signature to Kbutcher5's original comment and formatted Ultraman's post to reflect the fact that it was a response to Kbutcher5. More importantly, I wanted to note that it is important to contribute to the article. If we don't contribute, it will remain outdated. Even if you don't have time to make numerous edits, at least leave a link to a better tutorial or guide in the Additional Resources section. ~ Filam 22:22, 29 August 2011 (EDT)

Rebuild from chroot to avoid getting dropped to ramfs after GRUB install

For some reason the initcpio was not giving me my RAID volumes after I tried to boot into the new system as instructed by the article. This problem went away after I used the install CD to boot, loaded the raid1, raid5, and dm_mod modules manually, assembled the arrays, activated the volume group, mounted the partitions, chrooted into the new system, and rebuilt kernel26 (and therefore the initcpio). Notably, I needed to mount /sys in addition to /proc and /dev prior to chrooting in order to get this to work. I only mention this because it seems like this article and several others omit /sys as a source of device files when instructing users to chroot. Assuming that GRUB wants some access to devices when installing, I am wondering why only /proc and /dev are sufficient in the example outlined in the article but not in my case?

NB: in the course of my troubleshooting I added a definition of the RAID1 array holding my / to the kernel line in menu.lst, and I can say that this alone is insufficient to make the installation bootable as described in the article. I have not tried my configuration without it (it shouldn't be necessary with the mdadm hook in mkinitcpio.conf).

--Poopship21 21:24, 28 June 2009 (EDT)
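The recovery sequence described above, spelled out as commands (md numbers and volume group name are examples, not taken from Poopship21's setup):

  # modprobe raid1
  # modprobe raid5
  # modprobe dm_mod
  # mdadm --assemble --scan
  # vgchange -a y
  # mount /dev/md0 /mnt
  # mount /dev/md1 /mnt/boot
  # mount -o bind /dev /mnt/dev
  # mount -t proc none /mnt/proc
  # mount -t sysfs none /mnt/sys
  # chroot /mnt /bin/bash
  # pacman -S kernel26

Reinstalling kernel26 regenerates the initcpio, which is what fixes the missing arrays.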

The "mdadm" approach doesn't work well with raid0

Note that loading the "mdadm" hook doesn't always work with raid0. I've found out the hard way that this wiki just isn't complete.

I had to load the "raid" hook instead of the "mdadm" hook, and I had to load it before autodetect for it to work, like so:

HOOKS="base udev raid autodetect pata scsi sata filesystems"

I puzzled this together by combining the wiki with a "how to set up RAID during installation" guide I found on the Arch forums.

I had to do this in a chroot and only really used the setup for the packages.

--Thajan 13:07, 24 July 2009 (EDT)

I had a similar problem with RAID1. This page says that the "old style" requires kernel parameters, but nowadays it doesn't. As of 2010.05, I still need a kernel line like

kernel /vmlinuz26 root=/dev/md3 ro md=1,/dev/sda1,/dev/sdb1 md=3,/dev/sda3,/dev/sdb3

to get it to boot, even though the mdadm hook is in /etc/mkinitcpio.conf.

More info here: http://www.linuxquestions.org/questions/showthread.php?p=4147009

--MALDATA 10:44, 02 November 2010 (CST)

Migrating from PATA w/o RAID to SATA with Software RAID (mdadm)

After recreating the /dev nodes (null, console, and random), I tried to set up a software RAID0 with two SATA disks. It was a royal pain, but after playing around with hooks, what really worked was:

HOOKS="base udev sata mdadm filesystems autodetect".

Note the order: sata before mdadm, not after it (which is what the article says).


MSI Neo4-F with two SATA disks (Western Digital and Seagate) connected through SATA (not SATA II), Arch Linux x64, and a RAID0 root partition. Kernel 2.6.30-4.

--PeGa! 01:31, 18 August 2009 (EDT)

"archive your partition scheme"

I agree that this is a good idea, but I would caution against dumping the exact partition table back onto a new disk unless you know the disk to be physically identical in all respects to the old one. Usually you'll have a disk from a different manufacturer or a different production run and the total number of sectors (and thus the geometry) will be a little different.

It's best to re-do the partition table, so that you can make sure it's all nice & aligned with the geometry (some things don't like it when partition boundaries aren't aligned, e.g. you'll get warning messages on boot and cfdisk will hate you).

If you're re-adding the new partitions to existing RAID arrays, make sure you err on the side of making them a little bigger rather than a little smaller -- it sucks when they don't fit :)
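For the archiving step itself, sfdisk can dump and restore a partition table; the caveats above about differing disk geometry still apply when restoring to a non-identical disk (device names are examples):

  # sfdisk -d /dev/sda > sda.parttable
  # sfdisk /dev/sdb < sda.parttable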

Fixed mdadm --scan calls.

As per discussion on the arch-general mailing list, it was clear that the calls to mdadm happened at the wrong time.

mdadm --scan has to be called after mounting (which probably happens while the installer is already running), but before mkinitcpio runs, so that mkinitcpio can incorporate mdadm.conf into the initrd.

Therefore, I have moved the first mentioned call from running before /arch/setup to running during the setup on a second terminal. I have removed the second call after GRUB setup entirely, since by then it would already be too late.

--Jinks 18:34, 13 January 2010 (EST)
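Concretely, the call that has to land between mounting and mkinitcpio is something like the following, run on the second terminal while the installer is still going (assuming the target is mounted at /mnt):

  # mdadm --examine --scan >> /mnt/etc/mdadm.conf

mkinitcpio then picks the file up when the installer builds the initrd.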

Transform and Update Article - 2011

This article could prove to be extremely helpful, but it is outdated. Although that fact has been noted numerous times here and in the forums, the task hasn't been taken on in earnest. I believe it should be transformed into a Software RAID-specific article, with a short section referencing the more thorough and accurate LVM article. A lot of the information in this article is redundant and outdated compared to the LVM article. Some steps to be taken:

  1. Rename article to Software RAID
  2. Reference other more current/detailed articles, remove redundant information
  3. Remove the Outline section
  4. Add Required Software in summary template

~ Filam 08:21, 30 August 2011 (EDT)

I flagged the article as out-of-date. Rather than slowly make major revisions, which could cause the article to become inconsistent, I've created a new version in my user space, User:Filam/RAID. You're welcome to contribute edits there. ~ Filam 11:44, 31 August 2011 (EDT)

Create sysfs partition

If you are using an Arch Linux install CD <= 0.7.1, you have to create and mount a sysfs partition on /sys to keep lvm from getting cranky. Otherwise you can skip this mounting of sysfs, unless you run into trouble. If you forget to do this, instead of giving you an intelligent error message, lvm will simply segfault at various inconvenient times.

To mount the sysfs partition, do:

# mkdir /sys
# mount -t sysfs none /sys
I will be removing the preceding text from the main article as it is no longer relevant, but wanted to keep an easily accessible and searchable record of it. ~ Filam 16:13, 31 August 2011 (EDT)

Kernel parameters

Then specify the raid array you are booting from in /mnt/boot/grub/menu.lst like:

# Example with /dev/array/root for / & /dev/md1 for /boot:
  kernel /vmlinuz-linux root=/dev/array/root ro  md=1,/dev/sda1,/dev/sdb1,/dev/sdc1 md=0,/dev/sda3,/dev/sdb3,/dev/sdc3
Again, I will be removing the preceding text from the main article as it is no longer relevant, but wanted to keep an easily accessible and searchable record of it. ~ Filam 17:20, 31 August 2011 (EDT)

Install Grub with pre-2009 ISOs

This is the final step before you have a bootable system!

As an overview, the basic concept is to copy over the grub bootloader files into /boot/grub, mount a procfs and a device tree inside of /mnt, then chroot to /mnt so you are effectively inside your new system. Once in your new system, you will run grub to install the bootloader in the boot area of your first hard drive.

Copy the GRUB files into place and get into our chroot:

# cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub
# sync
# mount -o bind /dev /mnt/dev
# mount -t proc none /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt /bin/bash

At this point, you may no longer be able to see keys you type at your console. I am not sure of the reason for this (NOTE: try "chroot /mnt /bin/<shell>"), but you can fix it by typing reset at the prompt.

Once you have got console echo back on, type:

# grub

After a short wait while grub does some looking around, it should come back with a grub prompt. Do:

grub> root (hd0,0)
grub> setup (hd0)
grub> quit

That is it. You can exit your chroot now by hitting CTRL-D or typing exit.

Once again, I will be removing the preceding text from the main article as it is no longer relevant, but wanted to keep an easily accessible and searchable record of it. ~ Filam 11:00, 1 September 2011 (EDT)

Installation image

Which installation image should be used (i.e. Netinstall or Core)? As lilsirecho writes on the forum: "This action occurs when utilizing ftp install which installs the latest kernel rather than the very old kernel in the 2010-05 .iso. ... If you are not using ftp install grub2 may not be compatible with the kernel in 2010-05 .iso." ~ Filam 13:54, 1 September 2011 (EDT)

I don't know a thing about grub2 or RAID, but we now have new official images, from 2011.08.19; the forum discussion you mention took place two weeks before their release. -- Karol 14:14, 1 September 2011 (EDT)

Big fat raid5 warning

I have added this. If anyone disagrees, please discuss it here. Thanks.
-- Voidzero 16:18, 10 October 2011 (EDT)

The warning has been moved to RAID. Please add new discussions at the bottom of talk pages, thank you. -- Kynikos 07:12, 11 October 2011 (EDT)
Looks good, thanks. -- Voidzero