Talk:Software RAID and LVM

This article is an updated version of the [[Installing with Software RAID or LVM]] article. That page has been redirected to this page. The most recent revision of the old article can be found [https://wiki.archlinux.org/index.php?title=Installing_with_Software_RAID_or_LVM&oldid=154628 here].
 
{{note|Numerous sections on this talk page were archived to [[/Archive 1]] on May 22, 2012.}}

== Performance of Swap Array ==

It was pointed out here:

http://bbs.archlinux.org/viewtopic.php?p=121424#121495

that software RAIDing your swap is not useful, and even slows performance.

: I added a title to the above comment by [[User:Jstech|Jstech]] on [https://wiki.archlinux.org/index.php?title=Talk:Installing_with_Software_RAID_or_LVM&oldid=5390 1 November 2005]. Although, the above link is broken, it is supported by the [http://en.gentoo-wiki.com/wiki/RAID/Software#Create_the_Swap_Partition Gentoo Wiki article]. This is a reminder to make a note of it in the article. ~ [[User:Filam|Filam]] 22:09, 29 August 2011 (EDT)

== Installation image ==

Which [https://www.archlinux.org/download/ installation image] should be used (i.e. Netinstall or Core)? As '''lilsirecho''' writes on [https://bbs.archlinux.org/viewtopic.php?id=123698 the forum]: "''This action occurs when utilizing ftp install which installs the latest kernel rather than the very old kernel in the 2010-05 .iso. ... If you are not using ftp install grub2 may not be compatible with the kernel in 2010-05 .iso.''" ~ [[User:Filam|Filam]] 13:54, 1 September 2011 (EDT)

:I don't know a thing about groub2 or RAID, but we now have new official images, from 2011.08.19 - the forum discussion you mention took place 2 weeks before their release. -- [[User:Karol|Karol]] 14:14, 1 September 2011 (EDT)

::What's the status of this issue? [https://mailman.archlinux.org/pipermail/arch-releng/2012-May/002538.html We may have a new official image soon]. -- [[User:Karol|Karol]] ([[User talk:Karol|talk]]) 20:41, 7 May 2012 (UTC)
  
== Update mdadm.conf ==

The current instruction fails: the correct method is:

 # mdadm --examine --scan > /mnt/etc/mdadm.conf

I'll edit the page if a) someone can confirm it (it does work for me), and b) it should be added here or on [[User:Kynikos|Kynikos]]'s page...

[[User:Jasonwryan|Jasonwryan]] 17:23, 20 February 2012 (EST)

:Hi Jason, can you explain why you believe that is the correct directory? The man page states that {{ic|/etc/mdadm.conf}} and {{ic|/etc/mdadm/mdadm.conf}} are the only defaults. Otherwise you would need to use the {{ic|-c}} or {{ic|--config}} flag. And why would you add that edit to Kynikos's User page? ~ [[User:Filam|Filam]] 23:37, 20 February 2012 (EST)

::I recently reinstalled with Raid/LVM/LUKS and when manually updating files from outside the installer (ie, another TTY), I had to preface the directory with /mnt/ to access the correct file - I assumed it had to do with the way the installer sets up a chroot. I meant Kynikos's alternate RAID page (sorry, that wasn't clear) [[User:Jasonwryan|Jasonwryan]] 23:56, 20 February 2012 (EST)

:::I don't think it's clear yet... Maybe a link will drive away any doubt :) -- [[User:Kynikos|Kynikos]] 07:18, 21 February 2012 (EST)

::::Perhaps I am hallucinating: I thought you had a version of this page under development? [[User:Jasonwryan|Jasonwryan]] 16:21, 21 February 2012 (EST)

:::::Do you mean the [[RAID]] article? Both articles should make note of it. Unfortunately, I don't have time to reproduce it. Am I right in assuming that reproducing it would require someone to build a new RAID using the Arch Linux installation media?

::::::Yes: that is correct. This blog article is what tipped me off http://yannickloth.be/blog/2010/08/01/installing-archlinux-with-software-raid1-encrypted-filesystem-and-lvm2/ [[User:Jasonwryan|Jasonwryan]] 16:21, 21 February 2012 (EST)

::::Ok, I made a couple changes to the RAID article. Jason, can you append a paragraph to [[RAID#Update RAID configuration]] to the effect of "If you are updating the configuration file from the Arch Linux installation media you may need to ...". And then append a brief note template to [[Software RAID and LVM#Update RAID configuration]] with a link to the RAID article?

::::Thanks for taking the time to start a discussion section! ~ [[User:Filam|Filam]] 13:41, 21 February 2012 (EST)

:::::Yes - no problem [[User:Jasonwryan|Jasonwryan]] 16:21, 21 February 2012 (EST)
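
To make the directory distinction discussed above concrete, here is a minimal sketch. It assumes the new system is mounted at {{ic|/mnt}} during installation; the chroot invocation mirrors the one used elsewhere on this page.

 # From the installation environment (e.g. another TTY), the target root is still mounted under /mnt:
 mdadm --examine --scan > /mnt/etc/mdadm.conf
 
 # From inside a chroot into the new system, the default path from the man page applies:
 chroot /mnt /bin/bash
 mdadm --examine --scan > /etc/mdadm.conf

Either way, the file has to be in place before mkinitcpio generates the initramfs, so that it gets picked up.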
  
if [ "$USELVM" = "yes" -o "$USELVM" = "YES" ]; then
+
== Much adon't about [/]swap ==
if [ -f /etc/lvmtab -a -x /sbin/vgchange ]; then
+
    # Kernel 2.4.x, LVM1 groups
+
    stat_busy "Activating LVM1 groups"
+
    /sbin/vgchange -a y
+
    stat_done
+
  elif [ -x /sbin/lvm -a -d /sys/block ]; then
+
    # Kernel 2.6.x, LVM2 groups
+
    stat_busy "Activating LVM2 groups"
+
    /sbin/modprobe dm_mod # <<-----Here is my change
+
    /sbin/lvm vgscan --ignorelockingfailure
+
    /sbin/lvm vgchange --ignorelockingfailure -a y
+
    stat_done
+
  fi
+
fi
+
  
i use the standard arch kernel
+
== What about root on LVM ? ==

I can't figure out how to make it...

mkinitrd variable LVM_ROOT= what to set here??

i am using grub and partition is like this:

 /dev/sda1   ext2   /boot
 /dev/sda2   LVM
        vg_name: linux
        lv_name: system   /       ext3
                 home     /home   ext3
                 swap     none    sw

I've tryed
 LVM_ROOT=/dev/linux/system,
 grub kernel ... root=/dev/mapper/linux-system or root=/dev/linux/system

i have USELVM=YES and in initrd LVM is enabled...

lvm partitions was made with lvm2 - but i can activate and mount them manually

where am i wrong??? -
still blocked in bussybox with err - can't mount root - or so... or can't switch [don't remember] -
lvm partitions are disabled - i have to enable and mount them manually

--[[User:Suw|Suw]] 06:24, 8 April 2006 (EDT)

:The topic of how to stack LVM and RAID is covered in [http://serverfault.com/questions/217666/what-is-better-lvm-on-raid-or-raid-on-lvm What is better LVM on RAID or RAID on LVM?] on [[Wikipedia:Server Fault|Server Fault]]. I added a link to the resource section earlier today. ~ [[User:Filam|Filam]] 22:12, 29 August 2011 (EDT)
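
For completeness, the usual way this is handled nowadays is to include the {{ic|lvm2}} hook in mkinitcpio and point the kernel at the mapped device. A minimal sketch, reusing the volume names from the post above (everything else is illustrative):

 HOOKS="base udev autodetect ... lvm2 filesystems"
 kernel /vmlinuz26 root=/dev/mapper/linux-system ro

The {{ic|LVM_ROOT}} variable mentioned above comes from the old mkinitrd tooling this question was written against; the sketch assumes the later mkinitcpio-based setup instead.
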
== Much adon't about [/]swap ==

Assumption 1: '/swap' is a ''horribly'' confused way of referring to swap space.

Assumption 2: The md1 array (and only the md1 array) is intended to be used for '/swap'.

If my assumptions are correct, I expect this guide has unwittingly led a lot of people to put their '/swap' on md0 and spread their /home across md0 and md1.

The following command does what the guide seems to intend: devote all ''free'' PEs of md1 to lvswap:

{{ic|lvcreate VolGroupArray -l 100%PV /dev/md1 -n lvswap}}

- [[User:Alphaniner|Alphaniner]] ([[User talk:Alphaniner|talk]]) 18:43, 22 April 2013 (UTC)

:Hi '''Alphaniner''', I really appreciate your input to this article. I haven't looked at this article in a while, but I believe I understand what you're talking about.

:It looks like arrays are created for the swap space ({{ic|md1}}) and root filesystem ({{ic|md0}}), but the former is never used later in the article. Instead a swap partition is created on the VG that sits on top of {{ic|md0}}. Is that why you would think a reader would setup the swap space on {{ic|md0}}?

:Can you explain why {{ic|/home}} would be spread across {{ic|md0}} and {{ic|md1}}? It doesn't look like {{ic|md1}} is added to the Volume Group or even referenced in the article after its creation (which seems to be a related issue).

:~ [[User:Filam|Filam]] ([[User talk:Filam|talk]]) 15:54, 30 April 2013 (UTC)

::The wording of the article - ie. "Make the '''RAIDs''' accessible to LVM by converting '''them''' into physical '''volumes'''" and "Next step is to create a volume group (VG) on the '''PVs'''" - suggests both RAIDs are to be 'made into' PVs, and both PVs are to be 'put into' the VG. Sure, the commands don't reflect this, but they don't exclude it either. And this is Arch, after all... one can't expect one is being fed all necessary commands. OTOH, the fact that the creation of lvswap is optional contradicts this interpretation, all the way back to the creation of md1.

::How ever I look at it, though, I think this article has some fundamental flaws. And I think rectifying these will involve a lot of rework. Seeing as I've never done any significant wiki-work, I figured it would be sensible to have a discussion.

::- [[User:Alphaniner|Alphaniner]] ([[User talk:Alphaniner|talk]]) 18:56, 30 April 2013 (UTC)

:::I completely agree and certainly appreciate that you started the discussion.

:::The question I'm left with is whether it makes sense to add the swap array to the LVM at all. The only benefit I can think of is that the user can then reduce the swap space if the RAM is upgraded. If less swap space is required then that would allow the root filesystem to use part of the swap array and then there is no separation between the swap space and the root filesystem on the disc.

:::And then if you ignored the LVM, does it makes sense to use a different RAID level for the swap partitions than the root partitions?

:::~ [[User:Filam|Filam]] ([[User talk:Filam|talk]]) 15:11, 1 May 2013 (UTC)

::::I think the practical thing would be to remove all references to md1 and just describe a simple one RAID configuration, with the creation of lvswap remaining optional. Maybe the "Swap space" section could be expanded a bit to describe other ways of dealing with swap (separate array, discrete physical partitions), but I don't think even that is necessary.

::::-[[User:Alphaniner|Alphaniner]] ([[User talk:Alphaniner|talk]]) 19:04, 2 May 2013 (UTC)

:::::Sounds good to me. But isn't the following statement from the article valid?

:::::"Creating the swap space on a separate array is not intended to provide additional redundancy, but instead, to prevent a corrupt swap space from rendering the system inoperable, which is more likely to happen when the swap space is located on the same partition as the root directory."

:::::~ [[User:Filam|Filam]] ([[User talk:Filam|talk]]) 20:43, 2 May 2013 (UTC)

::::::I had forgotten about that. The article is the only place I'd ever heard it, so I don't know if it's true or not. If it is true - and a legitimate concern - then certainly we shouldn't be making any suggestions that would result in that. I guess this would mean either the md1 method or using physical partitions.

::::::- [[User:Alphaniner|Alphaniner]] ([[User talk:Alphaniner|talk]]) 21:57, 2 May 2013 (UTC)
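
For readers following along, a minimal sketch of the single-array layout proposed above (one RAID device used as the only PV, with lvswap optional). The volume group and LV names are taken from the discussion; the sizes are purely illustrative:

 # make the array available to LVM and put it in a volume group
 pvcreate /dev/md0
 vgcreate VolGroupArray /dev/md0
 
 # carve out logical volumes; lvswap is optional and can be resized later
 lvcreate VolGroupArray -L 2G -n lvswap
 lvcreate VolGroupArray -l 100%FREE -n lvroot
 mkswap /dev/VolGroupArray/lvswap

The commands exist as shown; whether to create lvswap at all, and at what size, are exactly the open questions in this thread.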
  
== Lousy guide ==

This has to be the worst, most crappy designed guide i have seen in all of my time with OpenSource, restructure this thing, and strip the old stuff for 7.1, it is VERY outdated. I must say that i would feel more confident just jumping head on out into raid on my own, than following this guid, get som structure in it. --[[User:Kbutcher5|Kbutcher5]] [https://wiki.archlinux.org/index.php?title=Talk%3AInstalling_with_Software_RAID_or_LVM&action=historysubmit&diff=23700&oldid=11010 13:23, 4 May 2007]

:I did an install today and used this guide as a guideline, next to my common sense and other documentation on the mighty internet. Looking at this guide from an abstract point of view one will find that the principles are still the same. Going into basic details, you'll also get the gist. But several specifics are inaccurate and the overall structure could be improved. The guidance goes flaky once you get to the "Install and Configure Arch" part and beyond. Because it is so outdated.

:That's where I had to use outside sources and apply my own knowhow to get the system running.

:Perhaps i'll edit some bits here and there to patch it up a little. I have a general idea of what I did to get it right. And i'll enter a new GRUB example, because the kernel with the mdadm hook can detect arrays, or get it right by reading /etc/mdadm.conf. --[[User:Ultraman|Ultraman]] 2:47, 3 May 2009 (CEST)

::I added a signature to [[User:Kbutcher5|Kbutcher5]]'s original comment and formatted [[User:Ultraman|Ultraman]]'s post to reflect the fact that it was a response to ''Kbutcher5''. More importantly, I wanted to note that it is important to contribute to the article. If we don't contribute it will remain outdated. Even if you don't have time to make numerous edits, at least leave a link to a better tutorial or guide in the [[Installing with Software RAID or LVM#Additional Resources|Additional Resources]] section. ~ [[User:Filam|Filam]] 22:22, 29 August 2011 (EDT)

== Optimum RAID link now goes to paid site ==

Under [[Software_RAID_and_LVM#Prepare_hard_drive]] is an external link, "Optimum RAID", at linuxpromagazine.com, which hosted the article in HTML format until sometime in the last couple weeks. It now requires a paid subscription to view the article as PDF, which is about as far from "open source community" as it gets. :-\  The link should probably be removed.

In its place... I dunno. Is there a good writeup elsewhere about measuring and figuring out the best stripe/width/etc mkfs parameters to use with RAID and LVM? For that matter, is it still required or is mkfs smart enough to Do The Right Thing when confronted with a device like this?

[[User:Superblocked|Superblocked]] ([[User talk:Superblocked|talk]]) 17:43, 22 May 2013 (UTC)

: How about [http://www.linas.org/linux/Software-RAID/Software-RAID-8.html this FAQ]? In general, I think that mdadm does the Right Thing to get decent performance when it's on top of LVM etc... I don't have the references right now to back that up, though. [[User:Giddie|Giddie]] ([[User talk:Giddie|talk]]) 08:50, 23 May 2013 (UTC)
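
As a starting point for the stride/stripe-width question above, a minimal sketch of the usual arithmetic for ext4, assuming a 4 KiB block size, a 64 KiB RAID chunk and two data disks (all of these numbers are illustrative, not taken from the article):

 # stride = chunk size / block size = 64 KiB / 4 KiB = 16
 # stripe width = stride * number of data disks = 16 * 2 = 32
 mkfs.ext4 -b 4096 -E stride=16,stripe-width=32 /dev/md0

The same values are normally used when the filesystem sits on an LV on top of the array, provided the LVM extents are aligned with the underlying chunks; whether current mkfs tools work this out automatically is exactly the open question here.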
 
== rebuild from chroot to avoid getting dropped to ramfs after grub install ==

for some reason the initcpio was not giving me my raid volumes after i tried to boot into the new system as instructed by the article.  this problem went away after i used the install cd to boot, loaded raid1, raid5, and dm_mod modules manually, assembled arrays, activated the volume group, mounted the partitions, chrooted into the new system and rebuilt kernel26 (and therefore the initcpio).  notably, i needed to mount /sys in addition to /proc and /dev prior to chrooting in order to get this to work.  i only mention this because it seems like this article and several others omit /sys as a source of device files when instructing users to chroot.  assuming that grub desires some access to devices when installing, i am wondering why only /proc and /dev are sufficient in the example outlined in the article but not in my case?

NB- in the course of my troubleshooting i added definition of the raid1 array on which my / is located to the kernel line in menu.lst and i can say that this alone is insufficient to confer bootability to the installation as defined in the article.  i have not tried my configuration without this (it shouldn't be necessary with the mdadm hook in mkinitcpio.conf)

--[[User:Poopship21|Poopship21]] 21:24, 28 June 2009 (EDT)
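
For reference, a minimal sketch of the rescue sequence described above, run from the installation CD; device, volume group and logical volume names are illustrative and will differ per system:

 modprobe raid1
 modprobe raid5
 modprobe dm_mod
 mdadm --assemble --scan
 vgchange -a y
 mount /dev/mapper/<vg>-<rootlv> /mnt
 mount -o bind /dev /mnt/dev
 mount -t proc none /mnt/proc
 mount -t sysfs none /mnt/sys    # the extra mount the poster found necessary
 chroot /mnt /bin/bash
 mkinitcpio -p kernel26          # preset name of that era; or reinstall kernel26, as the poster did

This simply follows the steps the poster lists; whether the /sys mount is strictly required likely depends on the tool versions involved.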
 
== The "mdadm" approach doesn't work well with raid0 ==

Note that loading the "mdadm" hook doesn't always work with raid0. I've found out the hard way that this wiki just isn't complete.

I had to load the "raid" hook instaid of the "mdadm" hook and had to load it ''before'' autodetect for it to work, like so:

 HOOKS="base udev raid autodetect pata scsi sata filesystems"

I puzzled this together by combining the wiki together with a "how to set up RAID during installation" guide I found on the ARCH forums.

I had to do this in a chroot and only really used the setup for the packages.

--[[User:Thajan|Thajan]] 13:07, 24 July 2009 (EDT)

I had a similar problem with RAID1. This page says that the "old style" requires kernel parameters, but nowadays it doesn't. As of 2010.05, I still need a kernel line like

 kernel /vmlinuz26 root=/dev/md3 ro md=1,/dev/sda1,/dev/sdb1 md=3,/dev/sda3,/dev/sdb3

to get it to boot, even though the mdadm hook is in /etc/mkinitcpio.conf.

More info here: http://www.linuxquestions.org/questions/showthread.php?p=4147009

--[[User:MALDATA|MALDATA]] 10:44, 02 November 2010 (CST)
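
A note for anyone reproducing either workaround above: after editing the HOOKS line in /etc/mkinitcpio.conf, the image has to be rebuilt or the change has no effect. A minimal sketch, using the preset name of that era:

 # mkinitcpio -p kernel26

On current systems the preset is named {{ic|linux}} instead.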
 
== Migrating from PATA w/o RAID to SATA with Software RAID (mdadm) ==

After recreating the /dev nodes (null, console and random), tried to set up a software raid0 with two SATA disks.
It was a ''royal pain in the ass'', but playing around with hooks, what really worked was:

 HOOKS="base udev sata mdadm filesystems autodetect"

Note the order: sata '''before''' mdadm, not after (as is said over there).

System:

MSI Neo4-F and two SATA disks (western digital and seagate) connected through SATA (non sata2), ArchLinux x64 and a root partition RAIDed 0. Kernel 2.6.30-4.

--[[User:PeGa!|PeGa!]] 01:31, 18 August 2009 (EDT)
 
== "archive your partition scheme" ==

I agree that this is a good idea, but I would caution against dumping the exact partition table back onto a new disk unless you know the disk to be physically identical in all respects to the old one. Usually you'll have a disk from a different manufacturer or a different production run and the total number of sectors (and thus the geometry) will be a little different.

It's best to re-do the partition table, so that you can make sure it's all nice & aligned with the geometry (some things don't like it when partition boundaries aren't aligned, e.g. you'll get warning messages on boot and cfdisk will hate you).

If you're re-adding the new partitions to existing RAID arrays, make sure you err on the side of making them a little bigger rather than a little smaller -- it sucks when they don't fit :)
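
For context, "archiving the partition scheme" is typically done with sfdisk; a minimal sketch, with device and file names purely illustrative:

 # save the layout of the old disk for reference
 sfdisk -d /dev/sda > sda.partition-layout
 # the cautioned-against shortcut would be to replay it verbatim onto the replacement disk:
 # sfdisk /dev/sdb < sda.partition-layout

As the comment above argues, it is usually safer to treat the dump as documentation and re-create the partitions by hand, slightly larger if they will rejoin an existing array.
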
== Fixed mdadm --scan calls. ==

As per discussion on the arch-general mailing list, it was clear that the calls to mdadm happened at the wrong time.

Mdadm --scan has to be called '''after''' mounting (which probably happens while the installer is already running), but before mkinitcpio runs, so that mkinitcpio can incorporate mdadm.conf into the initrd.

Therefore, I have moved the first mentioned call from running before /arch/setup to running during the setup on a 2nd terminal.
I have removed the 2nd call after grub setup entirely since by then it would already be too late.

--[[User:Jinks|Jinks]] 18:34, 13 January 2010 (EST)
 
== Transform and Update Article - 2011 ==

This article could prove to be extremely helpful, but it is outdated. Although that fact has been noted numerous times here and in the forums that task hasn't been taken on in earnest. I believe it should be transformed into a Software RAID specific article, with a short section referencing the more thorough and accurate [[LVM]] article. A lot of the information in this article is redundant, and outdated compared to the [[LVM]] article. Some steps to be taken:
# Rename article to '''Software RAID'''
# Reference other more current/detailed articles, remove redundant information
# Remove the [[Installing with Software RAID or LVM#Outline|Outline]] section
# Add '''Required Software''' in summary template
~ [[User:Filam|Filam]] 08:21, 30 August 2011 (EDT)

:I flagged the article as out-of-date. Rather than slowly make major revisions, which could cause the article to become inconsistent, I've created a new version in my user space, [[User:Filam/RAID]]. You're welcome to contribute edits there. ~ [[User:Filam|Filam]] 11:44, 31 August 2011 (EDT)
 
== Create sysfs partition ==

''If you are using an Arch Linux install CD <= 0.7.1, you have to create and mount a sysfs partition on /sys, to keep lvm from getting cranky. Otherwise you can skip this mounting of sysfs, unless you run into trouble. If you forget to do this, instead of giving you an intelligent error message, lvm will simply Segmentation fault at various inconvenient times.''

''To mount the sysfs partition, do:''
 # mkdir /sys
 # mount -t sysfs none /sys

:I will be removing the preceding text from the main article as it is no longer relevant, but wanted to keep an easily accessible and searchable record of it. ~ [[User:Filam|Filam]] 16:13, 31 August 2011 (EDT)
 
== Kernel parameters ==

''Then specify the raid array you are booting from in /mnt/boot/grub/menu.lst like:''

 # Example with /dev/array/root for / & /dev/md1 for /boot:
 kernel /vmlinuz-linux root=/dev/array/root ro md=1,/dev/sda1,/dev/sdb1,/dev/sdc1 md=0,/dev/sda3,/dev/sdb3,/dev/sdc3

:Again, I will be removing the preceding text from the main article as it is no longer relevant, but wanted to keep an easily accessible and searchable record of it. ~ [[User:Filam|Filam]] 17:20, 31 August 2011 (EDT)
 
== Install Grub with pre-2009 ISOs ==

''This is the last and final step before you have a bootable system!''

''As an overview, the basic concept is to copy over the grub bootloader files into /boot/grub, mount a procfs and a device tree inside of /mnt, then chroot to /mnt so you are effectively inside your new system.  Once in your new system, you will run grub to install the bootloader in the boot area of your first hard drive.''

''Copy the GRUB files into place and get into our chroot:''
 # cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub
 # sync
 # mount -o bind /dev /mnt/dev
 # mount -t proc none /mnt/proc
 # mount -t sysfs none /mnt/sys
 # chroot /mnt /bin/bash

''At this point, you may no longer be able to see keys you type at your console. I am not sure of the reason for this (NOTE: try "chroot /mnt /bin/<shell>"), but you can fix it by typing <code>reset</code> at the prompt.''

''Once you have got console echo back on, type:''
 # grub

''After a short wait while grub does some looking around, it should come back with a grub prompt. Do:''
 grub> root (hd0,0)
 grub> setup (hd0)
 grub> quit

''That is it.  You can exit your chroot now by hitting <code>CTRL-D</code> or typing <code>exit</code>.''

:Once again, I will be removing the preceding text from the main article as it is no longer relevant, but wanted to keep an easily accessible and searchable record of it. ~ [[User:Filam|Filam]] 11:00, 1 September 2011 (EDT)
 
== <s>Big fat raid5 warning</s> ==

I have added this. If anyone disagrees, please discuss it here. Thanks.
-- [[User:Voidzero|Voidzero]] 16:18, 10 October 2011 (EDT)

:The warning has been moved to [[RAID]]. Please add new discussions at the bottom of talk pages, thank you. -- [[User:Kynikos|Kynikos]] 07:12, 11 October 2011 (EDT)

:: Looks good, thanks. -- [[User:Voidzero|Voidzero]]
 