ArchWiki user contributions feed for Thedgm (en), retrieved 2024-03-29
RVM, revision of 2013-02-07 by Thedgm: uninstall pre-RVM gems from system
<hr />
<div>[[Category:Development]]<br />
[[de:RVM]]<br />
[http://rvm.io/ RVM] (Ruby Version Manager) is a command line tool which allows us to easily install, manage and work with multiple [[Ruby]] environments from interpreters to sets of gems.<br />
<br />
There exists a similar application that you may also want to consider: [[rbenv]].<br />
<br />
== Installing RVM ==<br />
<br />
The install process is easy and identical on any distribution, including Arch Linux. You have two choices: a system-wide installation or a per-user one. The first suits production servers and machines shared by multiple users (like a development test box), and requires root privileges. The second is recommended when you are the only user of the machine. If you do not know which to choose, start with a single-user installation.<br />
<br />
The upstream instructions for installing RVM should just work. The install script is smart enough to tell you which packages you need to install on Arch Linux to make the various rubies work. This usually means gcc and a few other tools needed to compile Ruby.<br />
<br />
Note that '''installing RVM with gem is no longer recommended'''. This article follows the [http://rvm.beginrescueend.com/rvm/install/ recommended documentation] with minor tweaks for Arch Linux.<br />
<br />
=== Pre-requisites ===<br />
<br />
Before starting, you will need the following to get the installation process going:<br />
<br />
# pacman -S git curl<br />
<br />
=== Single-user installation ===<br />
<br />
{{Note|This will install to your home directory only (~/.rvm), and won't touch the standard Arch ruby package, which is in /usr.}}<br />
<br />
For most purposes, the recommended installation method is single-user, which is a self-contained RVM installation in a user's home directory.<br />
<br />
Use the installation script recommended by the RVM documentation. Make sure to run it as the user for whom you want RVM installed (i.e. the normal user that you use for development).<br />
<br />
$ curl -L get.rvm.io | bash -s stable<br />
<br />
(to install a specific version, replace ''stable'' with, for example, ''-- --version 1.13.0'')<br />
<br />
'''If''' instead you want to check the script before running it, do:<br />
<br />
$ curl -L get.rvm.io > rvm-install<br />
<br />
Inspect the file, then run it with:<br />
<br />
$ bash < ./rvm-install<br />
<br />
After the script has finished, add the following line to the end of your ~/.bash_login or ~/.bashrc (or the equivalent startup file for your shell, e.g. ~/.zprofile):<br />
<br />
$ <nowiki>[[ -s "$HOME/.rvm/scripts/rvm" ]]</nowiki> && source "$HOME/.rvm/scripts/rvm"<br />
<br />
Now, close out your current shell or terminal session and open a new one. (You may attempt reloading your ~/.bash_login with the following command:<br />
<br />
$ source ~/.bash_login<br />
<br />
However, closing out your current shell or terminal and opening a new one is the preferred way for initial installations.)<br />
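The `[[ -s ... ]]` guard is what makes that line safe to keep in a startup file even on machines where RVM is not installed: the script is sourced only if the file exists and is non-empty. A minimal self-contained sketch of the idiom, using a temporary stand-in script instead of the real `~/.rvm/scripts/rvm`:

```shell
#!/usr/bin/env bash
# Demonstrate the [[ -s ... ]] guard used in ~/.bash_login:
# source a script only if it exists and has size greater than zero.

tmpdir=$(mktemp -d)
loaded=no

maybe_load() {
    # Same pattern as the RVM line, pointed at a stand-in script.
    [[ -s "$1" ]] && source "$1"
}

# Case 1: file missing -- silently skipped, no error.
maybe_load "$tmpdir/rvm"; echo "after missing file: loaded=$loaded"

# Case 2: empty file -- still skipped, because -s requires size > 0.
touch "$tmpdir/rvm"
maybe_load "$tmpdir/rvm"; echo "after empty file: loaded=$loaded"

# Case 3: non-empty file -- sourced, so it can modify this shell.
echo 'loaded=yes' > "$tmpdir/rvm"
maybe_load "$tmpdir/rvm"; echo "after real file: loaded=$loaded"
```

This is also why RVM must be ''sourced'' rather than executed: only a sourced script can change the environment of your interactive shell.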
<br />
=== Multi-user installation ===<br />
<br />
{{Note|This will install to /usr/local/rvm, and won't touch the standard Arch ruby package, which is in /usr. }}<br />
<br />
System-wide installation follows a similar procedure to the single-user install, except that you run the install script with sudo. '''Do not run the installer directly as root!'''<br />
<br />
$ curl -L get.rvm.io | sudo bash -s stable<br />
<br />
(to install a specific version replace ''stable'' with, for example, ''-- --version 1.13.0'')<br />
<br />
After the script has finished, add yourself and your users to the 'rvm' group. (The installer does not auto-add any users to the rvm group. Admins must do this.) For each one, repeat:<br />
<br />
$ sudo usermod -a -G rvm <user><br />
<br />
'''Group memberships are only evaluated at login time'''. Log the users out, then back in. You too: close your current shell or terminal session and open a new one. (You may instead try reloading your ~/.bash_login with the following command:<br />
<br />
$ source ~/.bash_login<br />
<br />
However, closing the current shell or terminal and opening a new one is the preferred way for initial installations. Alternatively, run "newgrp rvm" and check with "id" whether the shell has picked up the new group membership.)<br />
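A quick way to verify that the current shell has picked up the new group is to scan the output of `id -nG`; the helper function below is a made-up illustration, not part of RVM:

```shell
#!/usr/bin/env bash
# Hypothetical helper: check whether the *current shell session* has a
# given supplementary group (e.g. 'rvm' after logging back in).

in_group() {
    local g
    for g in $(id -nG); do          # groups of the current session
        [[ "$g" == "$1" ]] && return 0
    done
    return 1
}

if in_group rvm; then
    echo "this shell can use the multi-user RVM install"
else
    echo "log out and back in (or run: newgrp rvm)"
fi
```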
<br />
<br />
{{Note|Remember to change the line <nowiki>[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"</nowiki> to the system-wide location, replacing $HOME/.rvm with /usr/local/rvm}}<br />
<br />
<br />
In contrast to the single-user installation, RVM will be automatically configured for every user on the system; this is accomplished by loading /etc/profile.d/rvm.sh on login. Arch Linux sources /etc/profile by default, which in turn loads all files residing in the /etc/profile.d/ directory.<br />
<br />
Before installing gems with multi-user RVM, make sure that /etc/gemrc does not contain the line "gem: --user-install". If it does, comment it out, otherwise gems will be installed to the wrong place.<br />
<br />
'''Only use the sudo command during the install process'''. In multi-user configurations, any operation that requires elevated privileges must use the ''rvmsudo'' command, which preserves the RVM environment and passes it on to sudo. Once the core install is complete there are very few cases where rvmsudo is required, and there is never a reason to use plain sudo post-install. rvmsudo should mainly be needed for updating RVM itself with<br />
<br />
$ rvmsudo rvm get head<br />
<br />
===== A cautionary measure =====<br />
<br />
To prevent users from accidentally breaking the installation by running gem or rvm through plain sudo, you may add this configuration to your /etc/sudoers file:<br />
<br />
{{bc|1=<br />
## Cmnd alias specification<br />
Cmnd_Alias RVM = /usr/local/rvm/rubies/<ruby_interpreter>/bin/gem, \<br />
/usr/local/rvm/rubies/<another_ruby_interpreter>/bin/gem, \<br />
/usr/local/rvm/bin/rvm<br />
<br />
## User privilege specification<br />
root ALL=(ALL) ALL<br />
<br />
## Uncomment to allow members of group wheel to execute any command<br />
%wheel ALL=(ALL) ALL, !RVM<br />
}}<br />
<br />
Where ''<ruby_interpreter>'' would be, for example, ruby-1.9.2-p290.<br />
<br />
== Post Installation ==<br />
<br />
After the installation, check everything worked with this command:<br />
<br />
$ type rvm | head -n1<br />
<br />
The response should be:<br />
<br />
rvm is a function<br />
<br />
If you receive ''rvm: not found'', you may need to source your ~/.bash_login (or wherever you put the line above):<br />
<br />
$ . ~/.bash_login<br />
<br />
Check if the rvm function is working:<br />
<br />
$ rvm notes<br />
<br />
Finally, see if there are any dependency requirements for your installation by running:<br />
<br />
$ rvm requirements<br />
<br />
(Follow the returned instructions if any.)<br />
<br />
'''Very important''': whenever you upgrade RVM in the future, always run ''rvm notes'' and ''rvm requirements'' again, as these are usually where you will find details of any major changes and/or additional requirements '''to keep your installation working'''.<br />
<br />
=== Some extras ===<br />
<br />
You may put in your ~/.bashrc the following lines to get some useful features:<br />
{{bc|1=<br />
# Display the current RVM ruby selection<br />
PS1="\$(/usr/local/rvm/bin/rvm-prompt) $PS1"<br />
<br />
# RVM bash completion<br />
<nowiki>[[ -r /usr/local/rvm/scripts/completion ]]</nowiki> && . /usr/local/rvm/scripts/completion<br />
}}<br />
<br />
Or if you're running as a single user:<br />
{{bc|1=<br />
# RVM bash completion<br />
<nowiki>[[ -r "$HOME/.rvm/scripts/completion" ]]</nowiki> && source "$HOME/.rvm/scripts/completion"<br />
}}<br />
<br />
== Using RVM ==<br />
<br />
The RVM documentation is ''quite'' comprehensive and explanatory. However, here are some RVM usage examples to get you started.<br />
<br />
=== Rubies ===<br />
<br />
==== Installing environments ====<br />
<br />
To see what Ruby environments are available to install, run:<br />
<br />
$ rvm list known<br />
<br />
To install one, run:<br />
<br />
$ rvm install <ruby_version><br />
<br />
For example, to install Ruby 1.9.2 one would run the following command:<br />
<br />
$ rvm install 1.9.2<br />
<br />
This should download, configure and install Ruby 1.9.2 in the place you installed RVM. For example, if you did a single user install, it will be in ~/.rvm/rubies/1.9.2.<br />
<br />
You can define a default ruby interpreter by doing:<br />
<br />
$ rvm use <ruby_version> --default<br />
<br />
Otherwise, the default environment will be the system ruby in /usr (if you have installed one using pacman) or none.<br />
<br />
==== Switching environments ====<br />
<br />
To switch from one environment to another simply run:<br />
<br />
$ rvm use <ruby_version><br />
<br />
For example to switch to Ruby 1.8.7 one would run the following command:<br />
<br />
$ rvm 1.8.7<br />
<br />
(As you can see, the ''use'' keyword is not strictly necessary.)<br />
<br />
You should get a message telling you the switch worked. It can be confirmed by running:<br />
<br />
$ ruby --version<br />
<br />
Note that this environment will only be used in the current shell. You can open another shell and select a different environment for that one in parallel.<br />
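The per-shell behaviour comes from the fact that ''rvm use'' only changes environment variables (PATH, GEM_HOME and friends) in the current shell. A toy illustration of that isolation, using a stand-in variable rather than RVM's real ones:

```shell
#!/usr/bin/env bash
# Toy demonstration of why 'rvm use' is per-shell: environment changes
# made inside a subshell never leak into the parent or into siblings.

RUBY_SELECTION="system"

# Two independent "shells", each picking its own interpreter:
shell_a=$(RUBY_SELECTION="1.8.7"; echo "$RUBY_SELECTION")
shell_b=$(RUBY_SELECTION="1.9.2"; echo "$RUBY_SELECTION")

echo "shell A sees: $shell_a"
echo "shell B sees: $shell_b"
echo "parent still: $RUBY_SELECTION"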
<br />
In case you have set a default interpreter as explained above, you can do the switch with:<br />
<br />
$ rvm default<br />
<br />
==== System ruby ====<br />
<br />
If you wish to use the ruby interpreter that is outside RVM (i.e. the one installed in /usr by the standard Arch Linux package), you can switch to it using:<br />
<br />
$ rvm system<br />
<br />
==== Listing environments ====<br />
<br />
To see all installed Ruby environments, run the following command:<br />
<br />
$ rvm list<br />
<br />
If you've installed a few rubies, this might generate a list like so:<br />
<br />
rvm Rubies<br />
   jruby-1.5.0 [ i386-java ]<br />
=> ruby-1.8.7-p249 [ i386 ]<br />
   ruby-1.9.2-head [ i386 ]<br />
System Ruby<br />
   system [ i386 ]<br />
<br />
The ASCII arrow indicates which environment is currently enabled. In this case, it is Ruby 1.8.7. This could be confirmed by running:<br />
<br />
$ ruby --version<br />
ruby 1.8.7 (2010-01-10 patchlevel 249) [i686-linux]<br />
<br />
=== Gemsets ===<br />
<br />
RVM has a valuable feature called gemsets, which lets you store different sets of gems in compartmentalized, independent Ruby setups. This means that ruby, gems and irb are all separate and self-contained from the system and from each other.<br />
<br />
==== Creating ====<br />
<br />
Gemsets must be created before being used. To create a new gemset for the current ruby, do this:<br />
<br />
$ rvm use <ruby_version><br />
$ rvm gemset create <gemset_name><br />
<br />
Alternatively, if you prefer the shorthand syntax offered by rvm use, employ the --create option like so:<br />
<br />
$ rvm use <ruby_version>@<gemset_name> --create<br />
<br />
You can also specify a default gemset for a given ruby interpreter, by doing:<br />
<br />
$ rvm use <ruby_version>@<gemset_name> --default<br />
<br />
==== Using ====<br />
<br />
Tip: remove gems that were installed in the system prior to the RVM installation with:<br />
$ gem list --local | awk '{print "gem uninstall " $1}' | bash<br />
<br />
and check what's left:<br />
<br />
$ gem list --local<br />
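Before piping anything into bash, it is worth previewing what the pipeline will actually run: drop the final ''| bash'' and inspect the generated commands. A dry-run sketch on canned sample output (the gem names and versions are invented, not real `gem list --local` output):

```shell
#!/usr/bin/env bash
# Dry run of the cleanup pipeline from the tip above: generate the
# 'gem uninstall' commands from sample listing output and print them
# instead of executing them.

sample_gem_list='rake (0.9.2.2)
rdoc (3.9.4)
minitest (2.5.1)'

# $1 is the gem name; the parenthesized version is discarded.
cmds=$(printf '%s\n' "$sample_gem_list" | awk '{print "gem uninstall " $1}')
printf '%s\n' "$cmds"
```

Real `gem list --local` output may also contain header or blank lines, so inspecting the generated commands first is safer than piping straight into bash.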
<br />
To use a gemset:<br />
<br />
$ rvm gemset use <gemset_name><br />
<br />
You can switch to a gemset as you start to use a ruby, by appending @<gemset_name> to the end of the ruby selector string:<br />
<br />
$ rvm use <ruby_version>@<gemset_name><br />
<br />
===== Notes =====<br />
<br />
When you install a ruby environment, it comes with two gemsets out of the box, their names are ''default'' and ''global''. You will usually find in the latter some pre-installed common gems, while the former always starts empty. <br />
<br />
A little more on how the ''default'' and ''global'' gemsets differ: when you do not use a gemset at all, you get the gems in the default set. If you use a specific gemset (say @testing), it also inherits the gems from that ruby's @global. The global gemset exists so that you can share gems across all your gemsets.<br />
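The inheritance can be pictured with plain directories. The layout below only mimics RVM's gemset naming (one directory per ruby@gemset); it is a toy model, not a real RVM install:

```shell
#!/usr/bin/env bash
# Toy model of gemset inheritance: a specific gemset (@testing) sees
# its own gems plus those in @global. Gem names are invented.

root=$(mktemp -d)
mkdir -p "$root/ruby-1.9.2@global/gems/bundler-1.0.21"
mkdir -p "$root/ruby-1.9.2@testing/gems/rspec-2.7.0"

visible_gems() {
    # A gemset sees its own gems first, then inherits @global's.
    local d
    for d in "$root/ruby-1.9.2@$1/gems" "$root/ruby-1.9.2@global/gems"; do
        [ -d "$d" ] && ls "$d"
    done | sort -u
}

visible_gems testing
```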
<br />
==== Gems! ====<br />
<br />
Within a gemset, you can use the usual RubyGems commands<br />
$ gem install <gem><br />
to add,<br />
$ gem uninstall <gem><br />
to remove gems, and<br />
$ gem list<br />
to view installed ones.<br />
<br />
If you are deploying to a server, or you do not want to wait around for rdoc and ri to install for each gem, you can disable them for gem installs and updates. Just add these two lines to your ~/.gemrc or /etc/gemrc:<br />
<br />
install: --no-rdoc --no-ri<br />
update: --no-rdoc --no-ri<br />
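A small sketch of adding those two lines idempotently, written against a temporary file rather than the real ~/.gemrc (re-running a setup script should not duplicate the entries):

```shell
#!/usr/bin/env bash
# Sketch: append the no-rdoc/no-ri defaults to a gemrc file, but only
# once. Uses a temp file here instead of the real ~/.gemrc.

gemrc=$(mktemp)

add_gemrc_defaults() {
    # Skip if the install line is already present.
    grep -q '^install: --no-rdoc --no-ri' "$1" 2>/dev/null || {
        printf 'install: --no-rdoc --no-ri\nupdate: --no-rdoc --no-ri\n' >> "$1"
    }
}

add_gemrc_defaults "$gemrc"
add_gemrc_defaults "$gemrc"   # second call is a no-op
cat "$gemrc"
```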
<br />
==== Listing ====<br />
<br />
To see the name of the current gemset:<br />
<br />
$ rvm gemset name<br />
<br />
To list all named gemsets for the current ruby interpreter:<br />
<br />
$ rvm gemset list<br />
<br />
To list all named gemsets for all interpreters:<br />
<br />
$ rvm gemset list_all<br />
<br />
==== Deleting ====<br />
<br />
This action removes the current gemset:<br />
<br />
$ rvm gemset use <gemset_name><br />
$ rvm gemset delete <gemset_name><br />
<br />
By default, rvm deletes gemsets from the currently selected Ruby interpreter. To delete a gemset from a different interpreter, say 1.9.2, run your command this way:<br />
<br />
$ rvm 1.9.2 do gemset delete <gemset_name><br />
<br />
==== Emptying ====<br />
<br />
This action removes all gems installed in the gemset:<br />
<br />
$ rvm gemset use <gemset_name><br />
$ rvm gemset empty <gemset_name><br />
<br />
=== RVM ===<br />
<br />
==== Updating ====<br />
<br />
To upgrade to the most recent release version:<br />
<br />
$ rvm get latest<br />
<br />
To upgrade to the latest repository source version (with the most recent bugfixes):<br />
<br />
$ rvm get head<br />
<br />
Remember to use rvmsudo for multi-user setups. Update often!<br />
<br />
==== Uninstalling ====<br />
<br />
Executing<br />
<br />
$ rvm implode<br />
<br />
will cleanly wipe out the entire RVM installation.<br />
<br />
=== Further Reading ===<br />
<br />
This is just a simple introduction to switching Ruby versions with RVM and managing different sets of gems in different environments. There is a lot more that you can do with it! For more information, consult the very comprehensive RVM documentation. [https://rvm.beginrescueend.com/rvm/basics/ This page] is a good place to start.<br />
<br />
== Troubleshooting ==<br />
<br />
Take some care with RVM installations: Arch Linux is updated very frequently, and some earlier Ruby patchlevels do not build against its current libraries. RVM often does not choose the latest patchlevel to install, so you may need to check manually on the [http://www.ruby-lang.org/en/news/ ruby website] and force RVM to install it.<br />
<br />
==== "data definition has no type or storage class" ====<br />
<br />
This appears to be specific to 1.8.7. If you get this error while compiling, the following steps will fix the problem:<br />
<br />
$ cd src/ruby-1.8.7-p334/ext/dl<br />
$ rm callback.func<br />
$ touch callback.func<br />
$ ruby mkcallback.rb >> callback.func<br />
$ rm cbtable.func<br />
$ touch cbtable.func<br />
$ ruby mkcbtable.rb >> cbtable.func<br />
<br />
Naturally, substitute the actual build path to your source, which will be something like ~/.rvm/src/.<br />
<br />
==== Ruby 1.8.x won't compile with RVM ====<br />
<br />
This is a known issue on Arch Linux, caused by a problem with openssl: Arch uses openssl 1.0, while lower patchlevels of 1.8.7 assume 0.9.<br />
<br />
Certain patch levels may not build (p352 for example), p299 should work fine and can be installed using the following command:<br />
<br />
$ rvm remove 1.8.7<br />
$ rvm install 1.8.7-p299<br />
<br />
Another approach is to install local openssl via RVM:<br />
<br />
$ rvm pkg install openssl<br />
$ rvm remove 1.8.7<br />
$ rvm install 1.8.7 -C --with-openssl-dir=$HOME/.rvm/usr<br />
<br />
It may be necessary to patch 1.8.7:<br />
<br />
$ wget http://redmine.ruby-lang.org/attachments/download/1931/stdout-rouge-fix.patch<br />
$ rvm remove 1.8.7<br />
$ rvm install --patch ./stdout-rouge-fix.patch ruby-1.8.7-p352<br />
<br />
==== Ruby 1.9.1 won't compile with RVM ====<br />
<br />
As with 1.8.x, earlier patchlevels do not build against OpenSSL 1.0. You can use the same solution as above, installing openssl locally through RVM:<br />
<br />
$ rvm pkg install openssl<br />
$ rvm remove 1.9.1<br />
$ rvm install 1.9.1 -C --with-openssl-dir=$HOME/.rvm/usr<br />
<br />
Patchlevels above p378 have a problem with gem paths when $GEM_HOME is set. The problem is known and fixed in 1.9.2 (http://redmine.ruby-lang.org/issues/3584). If you really need 1.9.1, use p378.<br />
<br />
$ rvm install 1.9.1-p378 -C --with-openssl-dir=$HOME/.rvm/usr<br />
<br />
== See Also ==<br />
<br />
* [http://rvm.beginrescueend.com/ RVM project website].<br />
* [[RubyOnRails#Option_C:_The_Perfect_Rails_Setup|The Perfect Rails Setup]].</div>
LVM on software RAID, revision of 2012-12-12 by Thedgm
<hr />
<div>[[ru:Installing with Software RAID or LVM]]<br />
[[Category:Getting and installing Arch]]<br />
[[Category:File systems]]<br />
{{Article summary start}}<br />
{{Article summary text|This article will provide an example of how to install and configure Arch Linux with a software RAID or Logical Volume Manager (LVM).}}<br />
{{Article summary heading|Required software}}<br />
{{Article summary link|Software|}}<br />
{{Article summary heading|Related}}<br />
{{Article summary wiki|RAID}}<br />
{{Article summary wiki|LVM}}<br />
{{Article summary wiki|Installing with Fake RAID}}<br />
{{Article summary wiki|Convert a single drive system to RAID}}<br />
{{Article summary end}}<br />
<br />
The combination of [[RAID]] and [[LVM]] provides numerous features with few caveats compared to just using RAID.<br />
<br />
== Introduction ==<br />
{{warning|Be sure to review the [[RAID]] article and be aware of all applicable warnings, particularly if you select RAID5.}}<br />
<br />
Although [[RAID]] and [[LVM]] may seem like analogous technologies they each present unique features. This article uses an example with three similar 1TB SATA hard drives. The article assumes that the drives are accessible as {{ic|/dev/sda}}, {{ic|/dev/sdb}}, and {{ic|/dev/sdc}}. If you are using IDE drives, for maximum performance make sure that each drive is a master on its own separate channel.<br />
<br />
{{tip|It is good practice to ensure that only the drives involved in the installation are attached while performing the installation.}}<br />
<br />
{| border="1" width="100%" style="text-align:center;"<br />
|width="150px" align="left" | '''LVM Logical Volumes'''<br />
|{{ic|/}}<br />
|{{ic|/var}}<br />
|{{ic|/swap}}<br />
|{{ic|/home}}<br />
|}<br />
{| border="1" width="100%" style="text-align:center;"<br />
|width="150px" align="left" | '''LVM Volume Groups'''<br />
|{{ic|/dev/VolGroupArray}}<br />
|}<br />
{| border="1" width="100%" style="text-align:center;"<br />
|width="150px" align="left" | '''RAID Arrays'''<br />
|{{ic|/dev/md0}}<br />
|{{ic|/dev/md1}}<br />
|}<br />
{| border="1" width="100%" style="text-align:center;"<br />
|width="150px" align="left" | '''Physical Partitions'''<br />
|{{ic|/dev/sda1}}<br />
|{{ic|/dev/sdb1}}<br />
|{{ic|/dev/sdc1}}<br />
|{{ic|/dev/sda2}}<br />
|{{ic|/dev/sdb2}}<br />
|{{ic|/dev/sdc2}}<br />
|}<br />
{| border="1" width="100%" style="text-align:center;"<br />
|width="150px" align="left" | '''Hard Drives'''<br />
|{{ic|/dev/sda}}<br />
|{{ic|/dev/sdb}}<br />
|{{ic|/dev/sdc}}<br />
|}<br />
<br />
=== Swap space ===<br />
{{note|If you want extra performance, just let the kernel use distinct swap partitions as it does striping by default.}}<br />
<br />
Many tutorials treat the swap space differently, either by creating a separate RAID1 array or a LVM logical volume. Creating the swap space on a separate array is not intended to provide additional redundancy, but instead, to prevent a corrupt swap space from rendering the system inoperable, which is more likely to happen when the swap space is located on the same partition as the root directory.<br />
<br />
=== MBR vs. GPT ===<br />
{{Wikipedia|GUID Partition Table}}<br />
The widespread [[Master Boot Record]] (MBR) partitioning scheme, dating from the early 1980s, imposed limitations which affected the use of modern hardware. [[GUID Partition Table]] (GPT) is a new standard for the layout of the partition table based on the [[Wikipedia:Unified Extensible Firmware Interface|UEFI]] specification derived from Intel. Although GPT provides a significant improvement over a MBR, it does require the additional step of creating an additional partition at the beginning of each disk for GRUB2 (see: [[GRUB2#GPT specific instructions|GPT specific instructions]]).<br />
<br />
=== Boot loader ===<br />
This tutorial will use [[Syslinux|SYSLINUX]] instead of [[GRUB2]]. GRUB2 when used in conjunction with [[GUID Partition Table|GPT]] requires an additional [[GRUB2#GPT specific instructions|BIOS Boot Partition]]. Additionally, the [[DeveloperWiki:2011.08.19|2011.08.19]] Arch Linux installer does not support GRUB2.<br />
<br />
GRUB2, when combined with an initramfs (which in Arch Linux is generated by [[mkinitcpio]]), supports the default style of metadata currently created by mdadm (i.e. 1.2). SYSLINUX only supports version 1.0, and therefore requires the {{ic|<nowiki>--metadata=1.0</nowiki>}} option.<br />
<br />
Some boot loaders (e.g. [[GRUB]], [[LILO]]) will not support any 1.x metadata versions, and instead require the older version, 0.90. If you would like to use one of those boot loaders make sure to add the option {{ic|<nowiki>--metadata=0.90</nowiki>}} to the {{ic|/boot}} array during [[#RAID installation|RAID installation]].<br />
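The constraints above can be condensed into a small lookup. This is only a sketch of what the text says, not an official compatibility table, and the bootloader names are the ones discussed in this section:

```shell
#!/usr/bin/env bash
# Map a bootloader to the mdadm metadata flag needed for its /boot
# array, per the compatibility notes above.

metadata_flag_for() {
    case "$1" in
        grub2)     echo "" ;;                 # handles mdadm's default 1.2 metadata
        syslinux)  echo "--metadata=1.0" ;;   # only supports version 1.0
        grub|lilo) echo "--metadata=0.90" ;;  # pre-1.x metadata only
        *)         echo "unknown" ;;
    esac
}

echo "syslinux /boot array: mdadm --create ... $(metadata_flag_for syslinux) ..."
```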
<br />
== Installation ==<br />
Obtain the latest installation media and boot the Arch Linux installer as outlined in the [[Beginners' Guide#Preparation|Beginners' Guide]], or alternatively, in the [[Official Arch Linux Install Guide#Pre-Installation|Official Arch Linux Install Guide]]. Follow the directions outlined there until you have [[Beginners Guide#Configure Network (netinstall)|configured your network]].<br />
<br />
==== Load kernel modules ====<br />
Enter another TTY terminal by typing {{Keypress|Alt}}+{{Keypress|F2}}. Load the appropriate RAID (e.g. {{ic|raid0}}, {{ic|raid1}}, {{ic|raid5}}, {{ic|raid6}}, {{ic|raid10}}) and LVM (i.e. {{ic|dm-mod}}) modules. The following example makes use of RAID1 and RAID5.<br />
# modprobe raid1<br />
# modprobe raid5<br />
# modprobe dm-mod<br />
<br />
=== Prepare the hard drives ===<br />
{{note|If your hard drives are already prepared and all you want to do is activate RAID and LVM jump to [[Installing_with_Software_RAID_or_LVM#Activate_existing_RAID_devices_and_LVM_volumes|Activate existing RAID devices and LVM volumes]]. This can be achieved with alternative partitioning software (see: [http://yannickloth.be/blog/2010/08/01/installing-archlinux-with-software-raid1-encrypted-filesystem-and-lvm2/ Article]).}}<br />
<br />
Each hard drive will have a 100MB {{ic|/boot}} partition, 2048MB {{ic|/swap}} partition, and a {{ic|/}} partition that takes up the remainder of the disk.<br />
<br />
The boot partition must be RAID1, because GRUB does not have RAID drivers. Any other level will prevent your system from booting. Additionally, if there is a problem with one boot partition, the boot loader can boot normally from the other two partitions in the {{ic|/boot}} array. Finally, the partition you boot from must not be striped (i.e. RAID5, RAID0).<br />
<br />
==== Install gdisk ====<br />
Since most disk partitioning software does not support GPT (e.g. {{Pkg|fdisk}}, {{Pkg|sfdisk}}), you will need to install {{Pkg|gptfdisk}} to set the partition type of the boot loader partitions.<br />
<br />
Update the [[pacman]] database:<br />
# pacman-db-upgrade<br />
<br />
Refresh the package list:<br />
# pacman -Syy<br />
<br />
Install {{Pkg|gptfdisk}}:<br />
# pacman -S gdisk<br />
<br />
==== Partition hard drives ====<br />
We will use <code>gdisk</code> to create three partitions on each of the three hard drives (i.e. {{ic|/dev/sda}}, {{ic|/dev/sdb}}, {{ic|/dev/sdc}}):<br />
<br />
Name Flags Part Type FS Type [Label] Size (MB)<br />
-------------------------------------------------------------------------------<br />
sda1 Boot Primary linux_raid_m 100.00 # /boot<br />
sda2 Primary linux_raid_m 2000.00 # /swap<br />
sda3 Primary linux_raid_m 97900.00 # /<br />
<br />
Open {{ic|gdisk}} with the first hard drive:<br />
# gdisk /dev/sda<br />
<br />
and type the following commands at the prompt:<br />
# Add a new partition: {{Keypress|n}}<br />
# Select the default partition number: {{Keypress|Enter}}<br />
# Use the default for the first sector: {{Keypress|Enter}}<br />
# For {{ic|sda1}} and {{ic|sda2}} type the appropriate size in MB (i.e. {{ic|+100MB}} and {{ic|+2048M}}). For {{ic|sda3}} just hit {{Keypress|Enter}} to select the remainder of the disk.<br />
# Select {{ic|Linux RAID}} as the partition type: {{ic|fd00}}<br />
# Write the table to disk and exit: {{Keypress|w}}<br />
<br />
Repeat this process for {{ic|/dev/sdb}} and {{ic|/dev/sdc}} or use the alternate {{ic|sgdisk}} method below. You may need to reboot to allow the kernel to recognize the new tables.<br />
<br />
{{note|Make sure to create the same exact partitions on each disk. If a group of partitions of different sizes are assembled to create a RAID partition, it will work, but ''the redundant partition will be in multiples of the size of the smallest partition'', leaving the unallocated space to waste.}}<br />
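The note can be made concrete with a little arithmetic: for RAID1 the usable capacity is one member's worth, and for RAID5 it is (n - 1) times the ''smallest'' member. A sketch with invented sizes in MB:

```shell
#!/usr/bin/env bash
# Illustrate why mismatched member partitions waste space: array
# capacity is a multiple of the SMALLEST member.

raid_capacity_mb() {
    # usage: raid_capacity_mb <level> <size1> <size2> ...
    local level=$1; shift
    local n=$# min=$1 s
    for s in "$@"; do
        (( s < min )) && min=$s
    done
    case "$level" in
        1) echo "$min" ;;               # mirrored: one copy's worth
        5) echo $(( (n - 1) * min )) ;; # one member's worth used for parity
    esac
}

# Three "equal" partitions, one accidentally 100 MB smaller:
echo "RAID5 capacity: $(raid_capacity_mb 5 97900 97900 97800) MB"
```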
<br />
==== Clone partitions with sgdisk ====<br />
If you are using GPT, then you can use {{ic|sgdisk}} to clone the partition table from {{ic|/dev/sda}} to the other two hard drives:<br />
# sgdisk --backup=table /dev/sda<br />
# sgdisk --load-backup=table /dev/sdb<br />
# sgdisk --load-backup=table /dev/sdc<br />
<br />
=== RAID installation ===<br />
After creating the physical partitions, you are ready to set up the {{ic|/boot}}, {{ic|/swap}}, and {{ic|/}} arrays with {{ic|mdadm}}, an advanced tool for RAID management that will also be used to create {{ic|/etc/mdadm.conf}} within the installation environment.<br />
<br />
Create the {{ic|/}} array at {{ic|/dev/md0}}:<br />
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[abc]3<br />
<br />
Create the {{ic|/swap}} array at {{ic|/dev/md1}}:<br />
# mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sd[abc]2<br />
<br />
{{note|If you plan on installing a boot loader that does not support the 1.x version of RAID metadata make sure to add the {{ic|<nowiki>--metadata=0.90</nowiki>}} option to the following command.}}<br />
<br />
Create the {{ic|/boot}} array at {{ic|/dev/md2}}:<br />
# mdadm --create /dev/md2 --level=1 --raid-devices=3 --metadata=1.0 /dev/sd[abc]1<br />
<br />
==== Synchronization ====<br />
{{tip|If you want to avoid the initial resync with new hard drives add the {{ic|--assume-clean}} flag.}}<br />
<br />
After you create a RAID volume, it will synchronize the contents of the physical partitions within the array. You can monitor the progress by refreshing the output of {{ic|/proc/mdstat}} ten times per second with:<br />
# watch -n .1 cat /proc/mdstat<br />
<br />
{{tip|Follow the synchronization in another TTY terminal by typing {{Keypress|ALT}} + {{Keypress|F3}} and then execute the above command.}}<br />
<br />
Further information about the arrays is accessible with:<br />
# mdadm --misc --detail /dev/md[012] | less<br />
Once synchronization is complete the {{ic|State}} line should read {{ic|clean}}. Each device in the table at the bottom of the output should read {{ic|spare}} or {{ic|active sync}} in the {{ic|State}} column. {{ic|active sync}} means each device is actively in the array.<br />
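If you prefer a single number instead of watching the whole file, the resync progress can be pulled out of {{ic|/proc/mdstat}} with a one-line filter. The sketch below parses a canned sample so the logic is testable without a real array; on a live system you would read /proc/mdstat instead:

```shell
#!/usr/bin/env bash
# Extract the resync percentage from mdstat-style output. The sample
# below imitates /proc/mdstat during a RAID5 resync; numbers are
# illustrative.

sample_mdstat='md0 : active raid5 sdc3[2] sdb3[1] sda3[0]
      195559424 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [=====>...............]  resync = 28.7% (28095360/97779712) finish=12.3min'

# Capture the number between "resync = " and "%".
resync_pct=$(printf '%s\n' "$sample_mdstat" | sed -n 's/.*resync = \([0-9.]*\)%.*/\1/p')
echo "resync progress: ${resync_pct}%"
```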
<br />
{{note|Since the RAID synchronization is transparent to the file-system you can proceed with the installation and reboot your computer when necessary.}}<br />
<br />
=== LVM installation ===<br />
This section will convert the two RAIDs into physical volumes (PVs). Then combine those PVs into a volume group (VG). The VG will then be divided into logical volumes (LVs) that will act like physical partitions (e.g. {{ic|/}}, {{ic|/var}}, {{ic|/home}}). If you did not understand that make sure you read the [[LVM#Introduction|LVM Introduction]] section.<br />
<br />
==== Create physical volumes ====<br />
Make the RAIDs accessible to LVM by converting them into physical volumes (PVs):<br />
# pvcreate /dev/md0<br />
<br />
{{note|This might fail if you are creating PVs on an existing Volume Group. If so you might want to add {{ic|-ff}} option.}}<br />
<br />
Confirm that LVM has added the PVs with: <br />
# pvdisplay<br />
<br />
==== Create the volume group ====<br />
Next step is to create a volume group (VG) on the PVs.<br />
<br />
Create a volume group (VG) with the first PV:<br />
# vgcreate VolGroupArray /dev/md0<br />
<br />
Confirm that LVM has added the VG with: <br />
# vgdisplay<br />
<br />
==== Create logical volumes ====<br />
Now we need to create logical volumes (LVs) on the VG, much like we would normally [[Beginners Guide#Prepare Hard Drive|prepare a hard drive]]. In this example we will create separate {{ic|/}}, {{ic|/var}}, {{ic|/swap}}, {{ic|/home}} LVs. The LVs will be accessible as {{ic|/dev/mapper/VolGroupArray-<lvname>}} or {{ic|/dev/VolGroupArray/<lvname>}}.<br />
<br />
Create a {{ic|/}} LV:<br />
# lvcreate -L 20G VolGroupArray -n lvroot<br />
<br />
Create a {{ic|/var}} LV:<br />
# lvcreate -L 15G VolGroupArray -n lvvar<br />
<br />
{{note|If you would like to add the swap space to the LVM create a {{ic|/swap}} LV with the {{ic|-C y}} option, which creates a contiguous partition, so that your swap space does not get partitioned over one or more disks nor over non-contiguous physical extents:<br />
# lvcreate -C y -L 2G VolGroupArray -n lvswap<br />
}}<br />
<br />
Create a {{ic|/home}} LV that takes up the remainder of space in the VG:<br />
# lvcreate -l +100%FREE VolGroupArray -n lvhome<br />
<br />
Confirm that LVM has created the LVs with:<br />
# lvdisplay<br />
<br />
{{tip|You can start out with relatively small logical volumes and expand them later if needed. For simplicity, leave some free space in the volume group so there is room for expansion.}}<br />
<br />
=== Update RAID configuration ===<br />
Since the installer builds the initrd using {{ic|/etc/mdadm.conf}} in the target system, you should update that file with your RAID configuration. The original file can simply be deleted because it only contains comments on how to fill it in, and that is something mdadm can do automatically. So delete the original and have mdadm create a new one with the current setup:<br />
# mdadm --examine --scan > /etc/mdadm.conf<br />
<br />
{{Note|Read the note in the [[RAID#Update configuration file|Update configuration file]] section about ensuring that you write to the correct {{ic|mdadm.conf}} file from within the installer.}}<br />
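A quick sanity check on the generated file can catch a redirect that went to the wrong place. With the layout used in this guide you would expect three ARRAY lines, and the {{ic|/boot}} array (md2) on 1.0 metadata for SYSLINUX. The sample below uses made-up UUIDs:

```shell
#!/usr/bin/env bash
# Sanity-check sketch for a freshly generated mdadm.conf. The content
# is a canned sample imitating 'mdadm --examine --scan' output.

sample_conf='ARRAY /dev/md0 metadata=1.2 UUID=11111111:22222222:33333333:44444444
ARRAY /dev/md1 metadata=1.2 UUID=55555555:66666666:77777777:88888888
ARRAY /dev/md2 metadata=1.0 UUID=99999999:aaaaaaaa:bbbbbbbb:cccccccc'

arrays=$(printf '%s\n' "$sample_conf" | grep -c '^ARRAY ')
md2_meta=$(printf '%s\n' "$sample_conf" | awk '$2 == "/dev/md2" {print $3}')

echo "found $arrays arrays; md2 uses $md2_meta"
```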
<br />
=== Prepare hard drive ===<br />
Follow the directions outlined in the [[Beginners' Guide#Installation|Installation]] section until you reach the ''Prepare Hard Drive'' section. Skip the first two steps and navigate to the ''Manually Configure block devices, filesystems and mountpoints'' page. Remember to only configure the LVs (e.g. {{ic|/dev/mapper/VolGroupArray-lvhome}}) and '''not''' the actual disks (e.g. {{ic|/dev/sda1}}).<br />
<br />
{{warning|{{ic|mkfs.xfs}} will not align the chunk size and stripe size for optimum performance (see: [http://www.linuxpromagazine.com/Issues/2009/108/RAID-Performance Optimum RAID]).}}<br />
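If you format and mount the logical volumes by hand instead of through the installer's menu, the steps look roughly like this. A sketch only: the filesystem choice (ext4) and mount points are assumptions, not requirements:<br />

```shell
# Create filesystems on the LVs created earlier
mkfs.ext4 /dev/mapper/VolGroupArray-lvroot
mkfs.ext4 /dev/mapper/VolGroupArray-lvvar
mkfs.ext4 /dev/mapper/VolGroupArray-lvhome
mkswap /dev/mapper/VolGroupArray-lvswap

# Mount them under the installation target
mount /dev/mapper/VolGroupArray-lvroot /mnt
mkdir -p /mnt/var /mnt/home
mount /dev/mapper/VolGroupArray-lvvar /mnt/var
mount /dev/mapper/VolGroupArray-lvhome /mnt/home
swapon /dev/mapper/VolGroupArray-lvswap
```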
<br />
=== Configure system ===<br />
{{warning|Follow the steps in the [[LVM#Important|LVM Important]] section before proceeding with the installation.}}<br />
<br />
==== /etc/mkinitcpio.conf ====<br />
[[mkinitcpio]] can use a hook to assemble the arrays on boot. For more information see [[mkinitcpio#Using RAID|mkinitcpio Using RAID]].<br />
# Add the {{ic|dm_mod}} module to the {{ic|MODULES}} list in {{ic|/etc/mkinitcpio.conf}}.<br />
# Add the {{ic|mdadm_udev}} and {{ic|lvm2}} hooks to the {{ic|HOOKS}} list in {{ic|/etc/mkinitcpio.conf}} after {{ic|udev}}.<br />
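After these edits, the relevant lines in {{ic|/etc/mkinitcpio.conf}} might look like the excerpt below. The exact hook order beyond keeping {{ic|mdadm_udev}} and {{ic|lvm2}} after {{ic|udev}} and before {{ic|filesystems}} is an example, not a prescription; keep the other hooks your file already has:<br />

```shell
# /etc/mkinitcpio.conf (excerpt)
MODULES="dm_mod"
HOOKS="base udev autodetect modconf block mdadm_udev lvm2 filesystems keyboard fsck"
```

Remember to regenerate the initramfs after changing this file, e.g. with {{ic|mkinitcpio -p linux}}.<br />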
<br />
=== Conclusion ===<br />
Once the installation is complete, you can safely reboot your machine:<br />
# reboot<br />
<br />
=== Install the bootloader on the Alternate Boot Drives===<br />
Once you have successfully booted your new system for the first time, you will want to install the bootloader onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from any of the remaining drives (e.g. by switching the boot order in the BIOS). The method depends on the bootloader system you're using:<br />
<br />
==== Syslinux ====<br />
Log in to your new system as root and do:<br />
# /usr/sbin/syslinux-install_update -iam<br />
<br />
Syslinux handles installing the bootloader to the MBR on each of the members of the RAID array:<br />
Detected RAID on /boot - installing Syslinux with --raid<br />
Syslinux install successful<br />
<br />
Attribute Legacy Bios Bootable Set - /dev/sda1<br />
Attribute Legacy Bios Bootable Set - /dev/sdb1<br />
Installed MBR (/usr/lib/syslinux/gptmbr.bin) to /dev/sda<br />
Installed MBR (/usr/lib/syslinux/gptmbr.bin) to /dev/sdb<br />
<br />
==== Grub Legacy ====<br />
Log in to your new system as root and do:<br />
# grub<br />
grub> device (hd0) /dev/sdb<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> device (hd0) /dev/sdc<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
<br />
=== Archive your Filesystem Partition Scheme ===<br />
<br />
Now that you are done, it is worth taking a moment to archive the partition layout of each of your drives. This makes it trivial to replace or rebuild a disk in the event that one fails. You do this with the {{ic|sfdisk}} tool and the following steps:<br />
# mkdir /etc/partitions<br />
# sfdisk --dump /dev/sda >/etc/partitions/disc0.partitions<br />
# sfdisk --dump /dev/sdb >/etc/partitions/disc1.partitions<br />
# sfdisk --dump /dev/sdc >/etc/partitions/disc2.partitions<br />
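If a disk later fails, the saved dump lets you stamp the original layout onto the replacement before re-adding it to the arrays. A sketch, assuming the replacement takes the failed disk's device name and {{ic|/dev/md0}} is one of the degraded arrays:<br />

```shell
# Recreate the first disk's saved partition layout on its replacement
sfdisk /dev/sda < /etc/partitions/disc0.partitions

# Re-add the new partition to the degraded array; mdadm rebuilds it in the background
mdadm --manage /dev/md0 --add /dev/sda1
```

You can watch the rebuild progress with {{ic|cat /proc/mdstat}}.<br />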
<br />
== Management ==<br />
For further information on how to maintain your software RAID or LVM, review the [[RAID]] and [[LVM]] articles.<br />
<br />
== Additional Resources ==<br />
* [http://yannickloth.be/blog/2010/08/01/installing-archlinux-with-software-raid1-encrypted-filesystem-and-lvm2/ Setup Arch Linux on top of raid, LVM2 and encrypted partitions] by Yannick Loth<br />
* [http://stackoverflow.com/questions/237434/raid-verses-lvm RAID vs. LVM] on [[Wikipedia:Stack Overflow|Stack Overflow]]<br />
* [http://serverfault.com/questions/217666/what-is-better-lvm-on-raid-or-raid-on-lvm What is better LVM on RAID or RAID on LVM?] on [[Wikipedia:Server Fault|Server Fault]]<br />
* [http://www.gagme.com/greg/linux/raid-lvm.php Managing RAID and LVM with Linux (v0.5)] by Gregory Gulik<br />
* [http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml Gentoo Linux x86 with Software Raid and LVM2 Quick Install Guide]<br />
<br />
'''Forum threads'''<br />
* 2011-09-08 - Arch Linux - [https://bbs.archlinux.org/viewtopic.php?id=126172 LVM & RAID (1.2 metadata) + SYSLINUX]<br />
* 2011-04-20 - Arch Linux - [https://bbs.archlinux.org/viewtopic.php?pid=965357 Software RAID and LVM questions]<br />
* 2011-03-12 - Arch Linux - [https://bbs.archlinux.org/viewtopic.php?id=114965 Some newbie questions about installation, LVM, grub, RAID]</div>Thedgm