[[Category:KDE]]
[[it:Plasma]]
[[zh-CN:Plasma]]
{{Related articles start}}
{{Related|Desktop environment}}
{{Related|KDE}}
{{Related|Qt}}
{{Related|Uniform Look for Qt and GTK Applications}}
{{Related articles end}}
{{Merge|KDE|Perhaps this page made sense in 2009 but currently there seems no point to having a separate page. It duplicates and complicates the KDE plasma section.}}
Plasma is the component of the [[KDE]] project that displays the desktop (e.g. wallpapers, panels, etc.) using 'containments'. Containments can in turn hold other widgets, called plasmoids.

== Installation ==
{{Note|Plasma 5 cannot be installed alongside the KDE 4 workspace: installing it will prompt you to remove ''kdebase-workspace''. It is a good idea to remove that package first and then install {{Pkg|plasma-meta}} or the {{Grp|plasma}} group.}}

The Plasma 5 desktop workspace is available in the [[official repositories]] as the {{Pkg|plasma-meta}} package or the {{Grp|plasma}} group.

== Starting Plasma ==

There are two ways of starting Plasma, depending on your preference: using a '''display manager''' or '''xinitrc'''.

=== Using a Display Manager ===
A [[display manager]], or login manager, is typically a graphical user interface displayed at the end of the boot process in place of the default shell. It makes it easy to log in straight to Plasma. Using [[SDDM]] as the display manager is recommended, as it provides better integration with the Plasma 5 theme.

To launch a Plasma 5 session, choose "Plasma" in your [[display manager]] menu.

=== Using xinitrc ===
''See the [[xinitrc]] page for more information.''

Add this line to your {{ic|.xinitrc}} file:

{{hc|~/.xinitrc|
exec startkde
}}

Execute ''startx'' or ''xinit'' to start Plasma.

{{Note|If you want to start Xorg at boot, please read the [[Start X at login]] article.}}

== Configuration ==

Plasma is a desktop integration technology that provides many functions, such as displaying the wallpaper, adding widgets to the desktop, and handling the panel(s), or "taskbar(s)".

=== Themes ===

[http://kde-look.org/index.php?xcontentmode=76 Plasma themes] define the look of panels and plasmoids. For easy system-wide installation, some such themes are available in both the official repositories and the [https://aur.archlinux.org/packages.php?O=0&K=plasmatheme&do_Search=Go AUR].

The easiest way to install themes is by going through the Desktop Settings control panel:

 Workspace Theme > Desktop Theme > Get new Themes

This will present a frontend for [http://www.kde-look.org/ kde-look.org] that allows you to install, uninstall, or update third-party themes with a single click.

=== Plasmoids ===

The easiest way to install plasmoid scripts is by right-clicking onto a panel or the desktop:

 Add Widgets > Get new Widgets > Download Widgets

This will present a frontend for [http://www.kde-look.org/ kde-look.org] that allows you to install, uninstall, or update third-party plasmoid scripts with a single click.

== Tips and tricks ==

=== Decoupling the Dashboard from the Desktop - the plasma way ===

 Click on the top right cashew > Zoom out > Configure Plasma (in the menu at the top left of the new screen) > Use a separate dashboard

=== Adding an OSX style or so-called "fancy" panel ===

 Right click on the desktop > Add panel > Fancy panel

{{Warning|At the time of writing it is possible to edit a fancy panel extensively, but it will not remember any settings.}}

=== Having different wallpapers for each side of your cube ===

 Click on the top right cashew > Zoom out > Configure Plasma (in the menu at the top left of the new screen) > Different activity for each desktop

=== Mixing desktop and folder view activities in one cube ===

 Click on the top right cashew > Zoom out > Configure Plasma (in the menu at the top left of the new screen) > Different activity for each desktop

=== Changing Style of "old" qt4-Applications ===

If qt4-based applications do not use the Breeze design, you can use {{ic|qtconfig-qt4}} to fix their style.
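
A minimal sketch of this fix (the exact menu labels are assumptions and may differ by Qt version): launch the tool from a terminal, pick a style on the Appearance tab, and save with ''File > Save'':

 $ qtconfig-qt4
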
----
[[Category:Getting and installing Arch]]
[[Category:File systems]]
[[cs:LVM]]
[[de:LVM]]
[[es:LVM]]
[[fr:LVM]]
[[it:LVM]]
[[ja:LVM]]
[[ru:LVM]]
[[tr:LVM]]
[[zh-CN:LVM]]
{{Related articles start}}
{{Related|Software RAID and LVM}}
{{Related|System Encryption with LUKS}}
{{Related|dm-crypt/Encrypting an Entire System#LVM on LUKS}}
{{Related|dm-crypt/Encrypting an Entire System#LUKS on LVM}}
{{Related articles end}}
From [[Wikipedia:Logical Volume Manager (Linux)]]:
:LVM is a [[Wikipedia:logical volume management|logical volume manager]] for the [[Wikipedia:Linux kernel|Linux kernel]]; it manages disk drives and similar mass-storage devices.

=== LVM Building Blocks ===

Logical Volume Management makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel to provide a system of partitions independent of the underlying disk's layout. With LVM you abstract your storage into "virtual partitions", making it easier to extend and shrink partitions (subject to potential limitations of your file system) and to add or remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in repartitioning a disk in use (and wondering whether the kernel is using the old or the new partition table), and without having to move other partitions out of the way. This is strictly an ease-of-management feature: it does not provide any security by itself.

The basic building blocks of LVM are:

* '''Physical volume (PV)''': a partition on a hard disk (or even the disk itself, or a loopback file) on which you can have volume groups. It has a special header and is divided into physical extents. Think of physical volumes as the big building blocks from which your storage is built.
* '''Volume group (VG)''': a group of physical volumes that is used as a single storage volume (as one disk). Volume groups contain logical volumes. Think of volume groups as hard drives.
* '''Logical volume (LV)''': a "virtual/logical partition" that resides in a volume group and is composed of physical extents. Think of logical volumes as normal partitions.
* '''Physical extent (PE)''': a small chunk of disk space (usually 4 MiB) that can be assigned to a logical volume. Think of physical extents as parts of disks that can be allocated to any partition.

Example:

'''Physical disks'''

 Disk1 (/dev/sda):
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
 |Partition1 50GB (Physical volume) |Partition2 80GB (Physical volume)     |
 |/dev/sda1                         |/dev/sda2                             |
 |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _|

 Disk2 (/dev/sdb):
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
 |Partition1 120GB (Physical volume)   |
 |/dev/sdb1                            |
 |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _|

'''LVM logical volumes'''

 Volume Group1 (/dev/MyStorage/ = /dev/sda1 + /dev/sda2 + /dev/sdb1):
  _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
 |Logical volume1 15GB  |Logical volume2 35GB     |Logical volume3 200GB          |
 |/dev/MyStorage/rootvol|/dev/MyStorage/homevol   |/dev/MyStorage/mediavol        |
 |_ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _|

=== Advantages ===

LVM gives you more flexibility than just using normal hard drive partitions:
* Use any number of disks as one big disk.
* Have logical volumes stretched over several disks.
* Create small logical volumes and resize them "dynamically" as they fill up.
* Resize logical volumes regardless of their order on disk. The operation does not depend on the position of the LV within the VG, and there is no need for surrounding free space.
* Resize/create/delete logical and physical volumes online. File systems on them still need to be resized separately, but some support online resizing.
* Online/live migration of an LV being used by services to different disks, without having to restart those services.
* Snapshots allow you to back up a frozen copy of the file system while keeping service downtime to a minimum.

These features can be very helpful on servers, less so on desktops; you must decide whether they are worth the added abstraction.

=== Disadvantages ===

* Linux exclusive (almost): there is no official support in most other operating systems (FreeBSD, Windows, ...).
* Additional steps are needed when setting up the system, making it more complicated.
* If you use the [[Btrfs]] file system, its subvolume feature already provides a flexible layout. In that case, the additional abstraction layer of LVM may be unnecessary.

== Installing Arch Linux on LVM ==

You should create your LVM volumes between the [[Partitioning]] and [[File Systems#Format a device|formatting]] steps of the installation procedure. Instead of directly formatting a partition to be your root file system, the root file system will be created inside a logical volume (LV).

Make sure the {{pkg|lvm2}} package is [[pacman|installed]].

Quick overview:
* Create partition(s) where your PV will reside. Set the partition type to 'Linux LVM': {{ic|8e}} if you use MBR, {{ic|8e00}} for GPT (see the sketch after this list).
* Create your physical volumes (PV). If you have one disk, it is best to just create one PV in one large partition. If you have multiple disks, you can create partitions on each of them and create a PV on each partition.
* Create your volume group (VG) and add all the PVs to it.
* Create logical volumes (LV) inside your VG.
* Continue with the “Format the partitions” step of the [[Beginners' guide]].
* When you reach the “Create initial ramdisk environment” step in the Beginners' guide, add the {{ic|lvm}} hook to {{ic|/etc/mkinitcpio.conf}} (see below for details).
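
A minimal sketch of setting the type code non-interactively on a GPT disk, assuming the PV will live on the second partition of {{ic|/dev/sda}} (''sgdisk'' is provided by the {{pkg|gptfdisk}} package):

 # sgdisk --typecode=2:8e00 /dev/sda

On an MBR disk the same is done interactively in ''fdisk'' with the {{ic|t}} command, entering {{ic|8e}} as the type.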

{{Warning|{{ic|/boot}} cannot reside in LVM when using [[GRUB Legacy]], which does not support LVM. [[GRUB]] users do not have this limitation. If you need to use GRUB Legacy, you must create a separate {{ic|/boot}} partition and format it directly.}}

=== Create physical volumes ===

Make sure you target the right partitions! To find the partitions with type 'Linux LVM':
* MBR system: {{Ic|fdisk -l}}
* GPT system: {{Ic|lsblk}} and then {{Ic|gdisk -l ''disk-device''}}

Create a physical volume on them:
 # pvcreate ''disk-device''
''disk-device'' may be e.g. {{ic|/dev/sda2}}.
This command creates a header on each partition so it can be used for LVM.
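For example, a minimal sketch initializing the two partitions used in the following sections (substitute your own devices):

 # pvcreate /dev/sda2
 # pvcreate /dev/sdb1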
You can track created physical volumes with:
 # pvdisplay

{{Note|If using an SSD, use {{ic|pvcreate --dataalignment 1m /dev/sda2}} (for an erase block size below 1 MiB); see e.g. [http://serverfault.com/questions/356534/ssd-erase-block-size-lvm-pv-on-raw-device-alignment here].}}

=== Create volume group ===

The next step is to create a volume group on these physical volumes. First create a volume group on one of the new partitions, then add to it all the other physical volumes you want in it:
 # vgcreate VolGroup00 /dev/sda2
 # vgextend VolGroup00 /dev/sdb1
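Alternatively, both physical volumes can be added when the group is created (a sketch using the same example devices):

 # vgcreate VolGroup00 /dev/sda2 /dev/sdb1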
You can use any name you like instead of VolGroup00 when creating a volume group. You can track how your volume group grows with:
 # vgdisplay

{{Note|You can create more than one volume group if you need to, but then you will not have all your storage presented as one disk.}}

=== Create logical volumes ===

Now we need to create logical volumes in this volume group. You create a logical volume with the following command, giving the name of the new logical volume, its size, and the volume group it will live on:
 # lvcreate -L 10G VolGroup00 -n lvolhome
This will create a logical volume that you can access later at {{ic|/dev/mapper/VolGroup00-lvolhome}} or {{ic|/dev/VolGroup00/lvolhome}}. As with volume groups, you can use any name you want for your logical volume when creating it.

To create swap on a logical volume, an additional argument is needed:
 # lvcreate -C y -L 10G VolGroup00 -n lvolswap
{{Ic|-C y}} creates a contiguous volume, which means that your swap space does not get spread over one or more disks or over non-contiguous physical extents.
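After creating the volume, it still has to be formatted and enabled as swap; a minimal sketch using the standard util-linux tools and the name above:

 # mkswap /dev/VolGroup00/lvolswap
 # swapon /dev/VolGroup00/lvolswap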

If you want to fill all the free space left in a volume group, use the following command:
 # lvcreate -l +100%FREE VolGroup00 -n lvolmedia

You can track created logical volumes with:
 # lvdisplay

{{Note|You may need to load the ''device-mapper'' kernel module ({{ic|modprobe dm-mod}}) for the above commands to succeed.}}

{{Tip|You can start out with relatively small logical volumes and expand them later if needed. For simplicity, leave some free space in the volume group so there is room for expansion.}}

=== Create file systems and mount logical volumes ===

Your logical volumes should now be located in {{ic|/dev/mapper/}} and {{ic|/dev/''YourVolumeGroupName''}}. If you cannot find them, use the following commands to load the module for creating device nodes and to make the volume groups available:
 # modprobe dm-mod
 # vgscan
 # vgchange -ay
Now you can create file systems on the logical volumes and mount them as normal partitions (if you are installing Arch Linux, refer to [[Beginners' guide#Mount the partitions|mounting the partitions]] for additional details):
 # mkfs.ext4 /dev/mapper/VolGroup00-lvolhome
 # mount /dev/mapper/VolGroup00-lvolhome /home
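For a persistent mount, the logical volume gets an {{ic|/etc/fstab}} entry just like any other partition; a sketch for the example volume (the field values are the usual ext4 defaults, adjust as needed):

 /dev/mapper/VolGroup00-lvolhome  /home  ext4  defaults  0  2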

{{Warning|When choosing mountpoints, select your newly created logical volumes (use: {{ic|/dev/mapper/VolGroup00-lvolhome}}). Do '''not''' select the actual partitions on which the logical volumes were created (do not use: {{ic|/dev/sda2}}).}}

=== Add lvm hook to mkinitcpio.conf ===

Make sure the {{Ic|udev}} and {{Ic|lvm2}} [[mkinitcpio]] hooks are enabled.

{{Ic|udev}} is there by default. Edit the file and insert {{Ic|lvm2}} between {{Ic|block}} and {{Ic|filesystems}} like so:

{{hc|/etc/mkinitcpio.conf:|<nowiki>HOOKS="base udev ... block lvm2 filesystems"</nowiki>}}

Afterwards, you can continue with the normal installation instructions at the [[Mkinitcpio#Image_creation_and_activation|create an initial ramdisk]] step.
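On an already installed system, the change is applied by regenerating the initramfs; a sketch assuming the stock ''linux'' kernel preset:

 # mkinitcpio -p linux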

== Configuration ==

=== Advanced options ===

If you need monitoring (needed for snapshots), you can enable ''lvmetad'' by setting {{ic|1=use_lvmetad = 1}} in {{ic|/etc/lvm/lvm.conf}}. This is now the default.
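
The relevant line then looks like this (an excerpt; the option sits in the {{ic|global}} section of the file, which is an assumption to verify against your installed {{ic|lvm.conf}}):

{{hc|/etc/lvm/lvm.conf|<nowiki>use_lvmetad = 1</nowiki>}}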

You can restrict the volumes that are activated automatically by setting the {{Ic|auto_activation_volume_list}} option in {{Ic|/etc/lvm/lvm.conf}}. If in doubt, leave this option commented out.
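
A sketch of the syntax, assuming only the volume group {{ic|VolGroup00}} and one of its volumes should be auto-activated (the entry lives in the {{ic|activation}} section; names are examples):

{{hc|/etc/lvm/lvm.conf|<nowiki>auto_activation_volume_list = [ "VolGroup00", "VolGroup00/lvolhome" ]</nowiki>}}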

=== Grow physical volume ===

After enlarging the device a physical volume (PV) resides on, e.g. after growing a mdadm RAID array, you need to grow the PV with the following command:

 # pvresize /dev/mdX

{{Note|This command can be run while the volume is online.}}

=== Grow logical volume ===

To grow a logical volume, first grow the volume itself, then the file system on it, so that it uses the newly created free space. Say we have a logical volume of 15 GB with ext3 on it and we want to grow it to 20 GB. We need the following steps:
 # lvextend -L 20G VolGroup00/lvolhome
 # resize2fs /dev/VolGroup00/lvolhome
You may use {{Ic|lvresize}} instead of {{Ic|lvextend}}, e.g. {{ic|lvresize -L +5G VolGroup00/lvolhome}}.

If you want to fill all the free space in a volume group, use the following commands:
 # lvextend -l +100%FREE VolGroup00/lvolhome
 # resize2fs /dev/VolGroup00/lvolhome

{{Warning|Not all file systems support growing without loss of data and/or growing online.}}

{{Note|If you do not resize the file system, the volume will be bigger but partly unused, i.e. the file system keeps its previous size.}}

=== Shrink logical volume ===

Because the file system is probably as big as the logical volume it resides on, you need to shrink the file system first and then shrink the logical volume. Depending on the file system, you may need to unmount it first. Say we have a logical volume of 15 GB with ext3 on it and we want to shrink it to 10 GB. We need the following steps:
 # resize2fs /dev/VolGroup00/lvolhome 9G
 # lvreduce -L 10G VolGroup00/lvolhome

Here we shrunk the file system more than needed (to 9 GB), so that when we shrink the logical volume we do not accidentally cut off the end of the file system. Afterwards, we grow the file system again to fill all the free space left on the logical volume:
 # resize2fs /dev/VolGroup00/lvolhome
You may use {{Ic|lvresize}} instead of {{Ic|lvreduce}}, e.g. {{ic|lvresize -L -5G VolGroup00/lvolhome}}.

{{Warning|
* Do not reduce the file system size below the amount of space occupied by data, or you risk data loss.
* Not all file systems support shrinking without loss of data and/or shrinking online.
}}

{{Note|It is better to first reduce the file system to a size smaller than the target logical volume size, so that after resizing the logical volume we do not accidentally cut off data from the end of the file system.}}

=== Remove logical volume ===

{{Warning|Before you remove a logical volume, make sure to move all data that you want to keep somewhere else; otherwise, it will be lost!}}

First, find out the name of the logical volume you want to remove. You can get a list of all logical volumes on the system with:

 # lvs

Next, look up the mountpoint of the chosen logical volume:

 $ df -h

Then unmount it:

 # umount /your_mountpoint

Finally, remove the logical volume:

 # lvremove /dev/yourVG/yourLV

Confirm by typing {{ic|y}} and you are done.

Do not forget to update {{ic|/etc/fstab}}!

You can verify the removal of the logical volume by running {{ic|lvs}} as root again (see the first step of this section).

=== Add physical volume to a volume group ===

First create a new physical volume on the block device you wish to use, then extend the volume group:

{{bc|1=
# pvcreate /dev/sdb1
# vgextend VolGroup00 /dev/sdb1
}}

This will, of course, increase the total number of physical extents in the volume group, which can then be allocated by logical volumes as you see fit.

{{Note|It is considered good form to have a [[Partitioning|partition table]] on your storage medium below LVM. Use the appropriate type code: {{ic|8e}} for MBR, and {{ic|8e00}} for GPT partitions.}}

=== Remove partition from a volume group ===

All of the data on the partition needs to be moved to another physical volume in the group first. Fortunately, LVM makes this easy:
 # pvmove /dev/sdb1
If you want the data moved to a specific physical volume, specify it as the second argument to {{Ic|pvmove}}:
 # pvmove /dev/sdb1 /dev/sdf1
Then the physical volume needs to be removed from the volume group:
 # vgreduce myVg /dev/sdb1
Or remove all empty physical volumes:
 # vgreduce --all vg0

Lastly, if you want to use the partition for something else and want to keep LVM from treating the partition as a physical volume:
 # pvremove /dev/sdb1

=== Deactivate volume group ===

Just invoke:
 # vgchange -a n my_volume_group

This will deactivate the volume group and allow you to unmount the container it is stored in.

=== Snapshots ===

==== Introduction ====

LVM allows you to take a snapshot of your system in a much more space-efficient way than a traditional backup. It does this by using a COW (copy-on-write) policy: the initial snapshot simply references the data blocks of the origin volume, so it takes almost no space. Whenever a block is modified on the origin (or on the snapshot), the old data is first copied into the snapshot volume, so the snapshot keeps presenting the data as it was when the snapshot was taken. Thus, you can snapshot a system with 35 GB of data using just 2 GB of free space, as long as less than 2 GB is modified (on the original and the snapshot combined).

==== Configuration ====

You create snapshot logical volumes just like normal ones:

 # lvcreate --size 100M --snapshot --name snap01 /dev/mapper/vg0-pv
With that volume, you may modify up to 100M of data before the snapshot volume fills up.

Reverting the modified 'pv' logical volume to the state when the 'snap01' snapshot was taken can be done with:

 # lvconvert --merge /dev/vg0/snap01

If the origin logical volume is active, merging will occur on the next reboot (merging can even be initiated from a live CD).

The snapshot will no longer exist after merging.

Multiple snapshots can also be taken, and each one can be merged with the origin logical volume at will.

The snapshot can be backed up with '''dd''' (of the snapshot device) or mounted and archived with '''tar'''. A backup image made with '''dd''' will be as large as the whole volume, whereas a '''tar''' archive only contains the files residing on the snapshot volume.
To restore, create a snapshot, mount it, write or extract the backup to it, and then merge it with the origin.
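
A minimal sketch of the mount-and-archive variant (the mountpoint and archive path are examples):

 # mount /dev/vg0/snap01 /mnt/snapshot
 # tar -czf /backup/snap01.tar.gz -C /mnt/snapshot .
 # umount /mnt/snapshot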

It is important to have the ''dm_snapshot'' module listed in the MODULES variable of {{ic|/etc/mkinitcpio.conf}}; otherwise, the system will not boot. If you do this on an already installed system, make sure to rebuild the image with:
 # mkinitcpio -g /boot/initramfs-linux.img
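
The corresponding line would look like this (a sketch, following the MODULES example used in [[#LVM commands do not work]]):

{{hc|/etc/mkinitcpio.conf:|<nowiki>MODULES="dm_snapshot ..."</nowiki>}}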

Todo: scripts to automate snapshots of root before updates, to roll back... updating {{ic|menu.lst}} to boot snapshots (separate article?)

Snapshots are primarily used to provide a frozen copy of a file system for making backups: a backup that takes two hours still yields a consistent image of the file system, unlike directly backing up a live partition.

See [[Create root filesystem snapshots with LVM]] for automating the creation of clean root file system snapshots during system startup for backup and rollback.

See also [[Dm-crypt/Encrypting an Entire System#LVM on LUKS]] and [[Dm-crypt/Encrypting an Entire System#LUKS on LVM]] for combining LVM with disk encryption.

If you have LVM volumes that are not activated via the [[Mkinitcpio|initramfs]], enable the '''lvm-monitoring''' service (a [[systemd]] unit provided by the {{pkg|lvm2}} package).
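
A sketch of enabling it (the unit name is taken from the service name given above):

 # systemctl enable lvm-monitoring.service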

== Troubleshooting ==

=== Changes that could be required due to changes in the Arch Linux defaults ===

{{ic|1=use_lvmetad = 1}} must be set in {{ic|/etc/lvm/lvm.conf}}. This is now the default; if you have an {{ic|lvm.conf.pacnew}} file, you must merge this change.

=== LVM commands do not work ===

* Load the proper module:
 # modprobe dm_mod

The {{ic|dm_mod}} module should be loaded automatically. In case it is not, you can try:

{{hc|/etc/mkinitcpio.conf:|<nowiki>MODULES="dm_mod ..."</nowiki>}}

You will need to [[Mkinitcpio#Image_creation_and_activation|rebuild]] the initramfs to commit any changes you make.

* Try preceding the commands with ''lvm'', like this:
 # lvm pvdisplay

=== Logical Volumes do not show up ===

If you are trying to mount existing logical volumes, but they do not show up in {{ic|lvscan}}, you can use the following commands to activate them:

 # vgscan
 # vgchange -ay

=== LVM on removable media ===

Symptoms:

 # vgscan
 Reading all physical volumes. This may take a while...
 /dev/backupdrive1/backup: read failed after 0 of 4096 at 319836585984: Input/output error
 /dev/backupdrive1/backup: read failed after 0 of 4096 at 319836643328: Input/output error
 /dev/backupdrive1/backup: read failed after 0 of 4096 at 0: Input/output error
 /dev/backupdrive1/backup: read failed after 0 of 4096 at 4096: Input/output error
 Found volume group "backupdrive1" using metadata type lvm2
 Found volume group "networkdrive" using metadata type lvm2

Cause: the external LVM drive was removed without deactivating the volume group(s) first. Before you disconnect, make sure to:
 # vgchange -an ''volume group name''

Fix: assuming you already tried to activate the volume group with {{ic|vgchange -ay ''vg''}} and are receiving the Input/output errors:
 # vgchange -an ''volume group name''
Unplug the external drive, wait a few minutes, then:
 # vgscan
 # vgchange -ay ''volume group name''

=== Kernel options ===

In the kernel options, you may need {{ic|dolvm}}. {{ic|<nowiki>root=</nowiki>}} should be set to the logical volume, e.g. {{ic|/dev/mapper/''vg-name''-''lv-name''}}.
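
For example, with the names used earlier in this article, the relevant part of the kernel command line might read (a sketch; adjust to your own volume names):

 root=/dev/mapper/VolGroup00-lvolhome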

== See also ==

* [http://sourceware.org/lvm2/ LVM2 Resource Page] on SourceWare.org
* [http://tldp.org/HOWTO/LVM-HOWTO/ LVM HOWTO] article at The Linux Documentation Project
* [http://www.gentoo.org/doc/en/lvm2.xml Gentoo LVM2 installation] guide in the Gentoo documentation
* [http://wiki.gentoo.org/wiki/LVM LVM] article at the Gentoo wiki
* [http://www.joshbryan.com/blog/2008/01/02/lvm2-mirrors-vs-md-raid-1/ LVM2 Mirrors vs. MD Raid 1] post by Josh Bryan
* [http://www.tutonics.com/2012/11/ubuntu-lvm-guide-part-1.html Ubuntu LVM Guide Part 1], [http://www.tutonics.com/2012/12/lvm-guide-part-2-snapshots.html Part 2 (details snapshots)]
<hr />
<div>[[Category:Getting and installing Arch]]<br />
[[Category:File systems]]<br />
[[cs:LVM]]<br />
[[de:LVM]]<br />
[[es:LVM]]<br />
[[fr:LVM]]<br />
[[it:LVM]]<br />
[[ja:LVM]]<br />
[[ru:LVM]]<br />
[[tr:LVM]]<br />
[[zh-CN:LVM]]<br />
{{Related articles start}}<br />
{{Related|Software RAID and LVM}}<br />
{{Related|System Encryption with LUKS}}<br />
{{Related|dm-crypt/Encrypting an Entire System#LVM on LUKS}}<br />
{{Related|dm-crypt/Encrypting an Entire System#LUKS on LVM}}<br />
{{Related articles end}}<br />
From [[Wikipedia:Logical Volume Manager (Linux)]]:<br />
:LVM is a [[Wikipedia:logical volume management|logical volume manager]] for the [[Wikipedia:Linux kernel|Linux kernel]]; it manages disk drives and similar mass-storage devices.<br />
<br />
=== LVM Building Blocks ===<br />
<br />
Logical Volume Management makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel to provide a system of partitions independent of the underlying disk's layout. With LVM you abstract your storage and have "virtual partitions", making it easier to extend and shrink partitions (subject to potential limitations of your file system) and add/remove partitions without worrying about whether you have enough contiguous space on a particular disk, getting caught up in fdisking a disk in use (and wondering whether the kernel is using the old or new partition table), or, having to move other partitions out of the way. This is strictly an ease-of-management issue: it does not provide any security. However, it sits nicely with the other two technologies we are using.<br />
<br />
The basic building blocks of LVM are:<br />
<br />
* '''Physical volume (PV)''': Partition on hard disk (or even hard disk itself or loopback file) on which you can have volume groups. It has a special header and is divided into physical extents. Think of physical volumes as big building blocks which can be used to build your hard drive.<br />
* '''Volume group (VG)''': Group of physical volumes that are used as storage volume (as one disk). They contain logical volumes. Think of volume groups as hard drives.<br />
* '''Logical volume (LV)''': A "virtual/logical partition" that resides in a volume group and is composed of physical extents. Think of logical volumes as normal partitions.<br />
* '''Physical extent (PE)''': A small part of a disk (usually 4MiB) that can be assigned to a logical Volume. Think of physical extents as parts of disks that can be allocated to any partition.<br />
<br />
Example:<br />
'''Physical disks'''<br />
<br />
Disk1 (/dev/sda):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 50GB (Physical volume) |Partition2 80GB (Physical volume) |<br />
|/dev/sda1 |/dev/sda2 |<br />
|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
Disk2 (/dev/sdb):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 120GB (Physical volume) |<br />
|/dev/sdb1 |<br />
| _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _|<br />
<br />
'''LVM logical volumes'''<br />
<br />
Volume Group1 (/dev/MyStorage/ = /dev/sda1 + /dev/sda2 + /dev/sdb1):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <br />
|Logical volume1 15GB |Logical volume2 35GB |Logical volume3 200GB |<br />
|/dev/MyStorage/rootvol|/dev/MyStorage/homevol |/dev/MyStorage/mediavol |<br />
|_ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
=== Advantages ===<br />
<br />
LVM gives you more flexibility than just using normal hard drive partitions:<br />
* Use any number of disks as one big disk.<br />
* Have logical volumes stretched over several disks.<br />
* Create small logical volumes and resize them "dynamically" as they get more filled.<br />
* Resize logical volumes regardless of their order on disk. It does not depend on the position of the LV within VG, there is no need to ensure surrounding available space.<br />
* Resize/create/delete logical and physical volumes online. File systems on them still need to be resized, but some support online resizing.<br />
* Online/live migration of LV being used by services to different disks without having to restart services.<br />
* Snapshots allow you to backup a frozen copy of the file system, while keeping service downtime to a minimum.<br />
<br />
These can be very helpful in a server situation, desktop less so, but you must decide if the features are worth the abstraction.<br />
<br />
=== Disadvantages ===<br />
<br />
* Linux exclusive (almost). There is no official support in most other OS (FreeBSD, Windows..).<br />
* Additional steps in setting up the system, more complicated.<br />
* If you use the [[Btrfs]] file system, its Subvolume feature will also give you the benefit of having a flexible layout. In that case, using the additional Abstraction layer of LVM may be unnecessary.<br />
<br />
== Installing Arch Linux on LVM ==<br />
<br />
You should create your LVM Volumes between the [[Partitioning]] and [[File Systems#Format a device|formatting]] steps of the Installation Procedure. Instead of directly formatting a partition to be your root file system, it will be created inside a logical volume (LV). <br />
<br />
Make sure the {{pkg|lvm2}} package is [[pacman|installed]].<br />
<br />
Quick overview: <br />
* Create partition(s) where your PV will reside. Set the partition type to 'Linux LVM', which is 8e if you use MBR, 8e00 for GPT.<br />
* Create your physical volumes (PV). If you have one disk it is best to just create one PV in one large partition. If you have multiple disks you can create partitions on each of them and create a PV on each partition.<br />
* Create your volume group (VG) and add all the PV to it.<br />
* Create logical volumes (LV) inside your VG.<br />
* Continue with “Format the partitions” step of [[Beginners' guide]].<br />
* When you reach the “Create initial ramdisk environment” step in the Beginners Guide, add the {{ic|lvm}} hook to {{ic|/etc/mkinitcpio.conf}} (see below for details).<br />
<br />
{{Warning|{{ic|/boot}} cannot reside in LVM when using [[GRUB Legacy]], which does not support LVM. [[GRUB]] users do not have this limitation. If you need to use GRUB Legacy, you must create a separate {{ic|/boot}} partition and format it directly. }}<br />
<br />
=== Create physical volumes ===<br />
<br />
Make sure you target the right partitions! To find the partitions with type 'Linux LVM':<br />
* MBR system: {{Ic|fdisk -l}}<br />
* GPT system: {{Ic|lsblk}} and then {{Ic|gdisk -l ''disk-device''}}<br />
<br />
Create a physical volume on them:<br />
# pvcreate ''disk-device''<br />
''disk-device'' may be e.g. {{ic|/dev/sda2}}.<br />
This command creates a header on each partition so it can be used for LVM.<br />
You can track created physical volumes with:<br />
# pvdisplay<br />
<br />
{{Note|If using a SSD, use {{ic|pvcreate --dataalignment 1m /dev/sda2}} (for erase block size < 1MiB), see e.g. [http://serverfault.com/questions/356534/ssd-erase-block-size-lvm-pv-on-raw-device-alignment here]}}<br />
<br />
=== Create volume group ===<br />
<br />
Next step is to create a volume group on this physical volume. First you need to create a volume group on one of the new partitions and then add to it all other physical volumes you want to have in it:<br />
# vgcreate VolGroup00 /dev/sda2<br />
# vgextend VolGroup00 /dev/sdb1<br />
Also you can use any other name you like instead of VolGroup00 for a volume group when creating it. You can track how your volume group grows with:<br />
# vgdisplay<br />
<br />
{{Note|You can create more than one volume group if you need to, but then you will not have all your storage presented as one disk.}}<br />
<br />
=== Create logical volumes ===<br />
<br />
Now we need to create logical volumes on this volume group. You create a logical volume with the next command by giving the name of a new logical volume, its size, and the volume group it will live on:<br />
# lvcreate -L 10G VolGroup00 -n lvolhome<br />
This will create a logical volume that you can access later with {{ic|/dev/mapper/Volgroup00-lvolhome}} or {{ic|/dev/VolGroup00/lvolhome}}. Same as with the volume groups, you can use any name you want for your logical volume when creating it.<br />
<br />
To create swap on a logical volume, an additional argument is needed:<br />
# lvcreate -C y -L 10G VolGroup00 -n lvolswap<br />
The {{Ic|-C y}} is used to create a contiguous partition, which means that your swap space does not get partitioned over one or more disks nor over non-contiguous physical extents.<br />
<br />
If you want to fill all the free space left on a volume group, use the next command:<br />
# lvcreate -l +100%FREE VolGroup00 -n lvolmedia<br />
<br />
You can track created logical volumes with:<br />
# lvdisplay<br />
<br />
{{Note|You may need to load the ''device-mapper'' kernel module ('''modprobe dm-mod''') for the above commands to succeed:}}<br />
<br />
{{Tip|You can start out with relatively small logical volumes and expand them later if needed. For simplicity, leave some free space in the volume group so there is room for expansion.}}<br />
<br />
=== Create file systems and mount logical volumes ===<br />
<br />
Your logical volumes should now be located in {{ic|/dev/mapper/}} and {{ic|/dev/''YourVolumeGroupName''}}. If you cannot find them, use the next commands to bring up the module for creating device nodes and to make volume groups available:<br />
# modprobe dm-mod<br />
# vgscan<br />
# vgchange -ay<br />
Now you can create file systems on logical volumes and mount them as normal partitions (if you are installing Arch linux, refer to [[Beginners' guide#Mount the partitions|mounting the partitions]] for additional details):<br />
# mkfs.ext4 /dev/mapper/VolGroup00-lvolhome<br />
# mount /dev/mapper/VolGroup00-lvolhome /home<br />
<br />
{{Warning|When choosing mountpoints, just select your newly created logical volumes (use: {{ic|/dev/mapper/Volgroup00-lvolhome}}). Do '''not''' select the actual partitions on which logical volumes were created (do not use: {{ic|/dev/sda2}}).}}<br />
<br />
=== Add lvm hook to mkinitcpio.conf ===<br />
<br />
You will need to make sure the {{Ic|udev}} and {{Ic|lvm2}} [[mkinitcpio]] hooks are enabled.<br />
<br />
{{Ic|udev}} is there by default. Edit the file and insert {{Ic|lvm2}} between {{Ic|block}} and {{Ic|filesystem}} like so:<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>HOOKS="base udev ... block lvm2 filesystems"</nowiki>}}<br />
<br />
Afterwards, you can continue in normal installation instructions with the [[Mkinitcpio#Image_creation_and_activation|create an initial ramdisk]] step.<br />
<br />
== Configuration ==<br />
<br />
=== Advanced options ===<br />
<br />
If you need monitoring (needed for snapshots) you can enable lvmetad. <br />
For this set {{ic|1=use_lvmetad = 1}} in {{ic|/etc/lvm/lvm.conf}}.<br />
This is the default by now. <br />
<br />
You can restrict the volumes that are activated automatically by setting the {{Ic|auto_activation_volume_list}} in {{Ic|/etc/lvm/lvm.conf}}. If in doubt, leave this option commented out.<br />
<br />
=== Grow physical volume ===<br />
<br />
After changing the size of a physical volume (pv), e.g: growing a mdadm raid array, you need to grow the pv using the following command, <br />
<br />
{{Ic|Note (can be done online)}}:<br />
# pvresize /dev/mdX<br />
<br />
=== Grow logical volume ===<br />
<br />
To grow a logical volume you first need to grow the logical volume and then the file system to use the newly created free space. Let us say we have a logical volume of 15 GB with ext3 on it, and we want to grow it to 20 GB. We need to do the following steps: <br />
# lvextend -L 20G VolGroup00/lvolhome (or lvresize -L +5G VolGroup00/lvolhome)<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
You may use {{Ic|lvresize}} instead of {{Ic|lvextend}}.<br />
<br />
If you want to fill all the free space on a volume group, use the next commands:<br />
# lvextend -l +100%FREE VolGroup00/lvolhome<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
<br />
{{Warning|Not all file systems support growing without loss of data and/or growing online.}}<br />
<br />
{{Note|If you do not resize your file system, you will still have a volume with the same size as before (volume will be bigger but partly unused).}}<br />
<br />
=== Shrink logical volume ===<br />
<br />
Because your file system is probably as big as the logical volume it resides on, you need to shrink the file system first and then shrink the logical volume. Depending on your file system, you may need to unmount it first. Let us say we have a logical volume of 15 GB with ext3 on it and we want to shrink it to 10 GB. We need to do the following steps: <br />
# resize2fs /dev/VolGroup00/lvolhome 9G<br />
# lvreduce -L 10G VolGroup00/lvolhome<br />
<br />
Here we shrunk the file system more than needed so that when we shrunk the logical volume we did not accidentally cut off the end of the file system. After that, we normally grow the file system to fill all free space left on logical volume. You may use {{Ic|lvresize}} instead of {{Ic|lvreduce}}.<br />
# lvresize -L -5G VolGroup00/lvolhome<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
<br />
{{Warning|<br />
* Do not reduce the file system size to less than the amount of space occupied by data or you risk data loss.<br />
* Not all file systems support shrinking without loss of data and/or shrinking online.<br />
}}<br />
<br />
{{Note|It is better to reduce the file system to a smaller size than the logical volume, so that after resizing the logical volume, we do not accidentally cut off some data from the end of the file system.}}<br />
<br />
=== Remove logical volume ===<br />
<br />
{{Warning|Before you remove a logical volume, make sure to move all data that you want to keep somewhere else; otherwise, it will be lost!}}<br />
<br />
First, find out the name of the logical volume you want to remove. You can get a list of all logical volumes installed on the system with:<br />
<br />
# lvs<br />
<br />
Next, look up the mountpoint for your chosen logical volume...:<br />
<br />
$ df -h<br />
<br />
... and unmount it:<br />
<br />
# umount /your_mountpoint<br />
<br />
Finally, remove the logical volume:<br />
<br />
# lvremove /dev/yourVG/yourLV<br />
<br />
Confirm by typing {{ic|y}} and you are done.<br />
<br />
Do not forget, to update {{ic|/etc/fstab}}!<br />
<br />
You can verify the removal of your logical volume by typing {{ic|lvs}} as root again (see first step of this section).<br />
<br />
=== Add physical volume to a volume group ===<br />
<br />
You first create a new physical volume on the block device you wish to use, then extend your volume group<br />
<br />
{{bc|1=<br />
# pvcreate /dev/sdb1<br />
# vgextend VolGroup00 /dev/sdb1<br />
}}<br />
<br />
This of course will increase the total number of physical extents on your volume group, which can be allocated by logical volumes as you see fit.<br />
<br />
{{Note|It is considered good form to have a [[Partitioning|partition table]] on your storage medium below LVM. Use the appropriate type code: {{ic|8e}} for MBR, and {{ic|8e00}} for GPT partitions.}}<br />
<br />
=== Remove partition from a volume group ===<br />
<br />
All of the data on that partition needs to be moved to another partition. Fortunately, LVM makes this easy:<br />
# pvmove /dev/sdb1<br />
If you want to have the data on a specific physical volume, specify that as the second argument to {{Ic|pvmove}}:<br />
# pvmove /dev/sdb1 /dev/sdf1<br />
Then the physical volume needs to be removed from the volume group:<br />
# vgreduce myVg /dev/sdb1<br />
Or remove all empty physical volumes:<br />
# vgreduce --all vg0<br />
<br />
And lastly, if you want to use the partition for something else, and want to avoid LVM thinking that the partition is a physical volume:<br />
# pvremove /dev/sdb1<br />
<br />
<br />
=== Deactivate volume group ===<br />
<br />
Just invoke <br />
# vgchange -a n my_volume_group<br />
<br />
This will deactivate the volume group and allow you to unmount the container it is stored in.<br />
<br />
=== Snapshots ===<br />
<br />
==== Introduction ====<br />
<br />
LVM allows you to take a snapshot of your system in a much more efficient way than a traditional backup. It does this efficiently by using a COW (copy-on-write) policy. The initial snapshot you take simply contains hard-links to the inodes of your actual data. So long as your data remains unchanged, the snapshot merely contains its inode pointers and not the data itself. Whenever you modify a file or directory that the snapshot points to, LVM automatically clones the data, the old copy referenced by the snapshot, and the new copy referenced by your active system. Thus, you can snapshot a system with 35GB of data using just 2GB of free space so long as you modify less than 2GB (on both the original and snapshot).<br />
<br />
==== Configuration ====<br />
<br />
You create snapshot logical volumes just like normal ones.<br />
<br />
# lvcreate --size 100M --snapshot --name snap01 /dev/mapper/vg0-pv<br />
With that volume, you may modify less than 100M of data, before the snapshot volume fills up.<br />
<br />
Reverting the modified 'pv' logical volume to the state when the 'snap01' snapshot was taken can be done with<br />
<br />
{{ic|# lvconvert --merge /dev/vg0/snap01}}<br />
<br />
In case the origin logical volume is active, merging will occur on the next reboot.(Merging can be done even from a LiveCD)<br />
<br />
The snapshot will no longer exist after merging.<br />
<br />
Also multiple snapshots can be taken and each one can be merged with the origin logical volume at will.<br />
<br />
The snapshot can be mounted and backed up with '''dd''' or '''tar'''. The size of the backup file done with '''dd''' will be the size of the files residing on the snapshot volume. <br />
To restore just create a snapshot, mount it, and write or extract the backup to it. And then merge it with the origin.<br />
<br />
It is important to have the ''dm_snapshot'' module listed in the MODULES variable of {{ic|/etc/mkinitcpio.conf}}, otherwise the system will not boot. If you do this on an already installed system, make sure to rebuild the image with<br />
# mkinitcpio -g /boot/initramfs-linux.img<br />
<br />
Todo: scripts to automate snapshots of root before updates, to rollback... updating {{ic|menu.lst}} to boot snapshots (separate article?)<br />
<br />
snapshots are primarily used to provide a frozen copy of a file system to make backups; a backup taking two hours provides a more consistent image of the file system than directly backing up the partition.<br />
<br />
See [[Create root filesystem snapshots with LVM]] for automating the creation of clean root file system snapshots during system startup for backup and rollback.<br />
<br />
[[Dm-crypt/Encrypting an Entire System#LVM on LUKS]] and [[Dm-crypt/Encrypting an Entire System#LUKS on LVM]].<br />
<br />
If you have LVM volumes not activated via the [[Mkinitcpio|initramfs]], [[#Using units|enable]] the '''lvm-monitoring''' service, which is provided by the {{pkg|lvm2}} package.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Changes that could be required due to changes in the Arch-Linux defaults ===<br />
<br />
The {{ic|1=use_lvmetad = 1}} must be set in {{ic|/etc/lvm/lvm.conf}}. This is the default now - if you have a {{ic|lvm.conf.pacnew}} file, you must merge this change.<br />
<br />
=== LVM commands do not work ===<br />
<br />
* Load proper module:<br />
# modprobe dm_mod<br />
<br />
The {{ic|dm_mod}} module should be automatically loaded. In case it does not, you can try:<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>MODULES="dm_mod ..."</nowiki>}}<br />
<br />
You will need to [[Mkinitcpio#Image_creation_and_activation|rebuild]] the initramfs to commit any changes you made.<br />
<br />
* Try preceding commands with ''lvm'' like this:<br />
# lvm pvdisplay<br />
<br />
=== Logical Volumes do not show up ===<br />
<br />
If you are trying to mount existing logical volumes, but they do not show up in {{ic|lvscan}}, you can use the following commands to activate them:<br />
<br />
# vgscan<br />
# vgchange -ay<br />
<br />
=== LVM on removable media ===<br />
<br />
Symptoms:<br />
# vgscan<br />
Reading all physical volumes. This may take a while...<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836585984: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836643328: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 0: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 4096: Input/output error<br />
Found volume group "backupdrive1" using metadata type lvm2<br />
Found volume group "networkdrive" using metadata type lvm2<br />
<br />
Cause:<br />
:Removing an external LVM drive without deactivating the volume group(s) first. Before you disconnect, make sure to:<br />
# vgchange -an ''volume group name''<br />
<br />
Fix: (assuming you already tried to activate the volume group with {{ic|# vgchange -ay ''vg''}}, and are receiving the Input/output errors:<br />
# vgchange -an ''volume group name''<br />
Unplug the external drive and wait a few minutes:<br />
# vgscan<br />
# vgchange -ay ''volume group name''<br />
<br />
=== Kernel options ===<br />
<br />
In kernel options, you may need {{ic|dolvm}}. {{ic|<nowiki>root=</nowiki>}} should be set to the logical volume, e.g {{ic|/dev/mapper/''vg-name''-''lv-name''}}.<br />
<br />
== See also ==<br />
<br />
* [http://sourceware.org/lvm2/ LVM2 Resource Page] on SourceWare.org<br />
* [http://tldp.org/HOWTO/LVM-HOWTO/ LVM HOWTO] article at The Linux Documentation project<br />
* [http://www.gentoo.org/doc/en/lvm2.xml Gentoo LVM2 installation] guide at Gentoo documentation<br />
* [http://wiki.gentoo.org/wiki/LVM LVM] article at Gentoo wiki<br />
* [http://www.joshbryan.com/blog/2008/01/02/lvm2-mirrors-vs-md-raid-1/ LVM2 Mirrors vs. MD Raid 1] post by Josh Bryan<br />
* [http://www.tutonics.com/2012/11/ubuntu-lvm-guide-part-1.html Ubuntu LVM Guide Part 1][http://www.tutonics.com/2012/12/lvm-guide-part-2-snapshots.html Part 2 detals snapshots]</div>Ploomshttps://wiki.archlinux.org/index.php?title=LVM&diff=307481LVM2014-03-28T10:48:50Z<p>Plooms: /* Grow physical volume */</p>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
[[Category:File systems]]<br />
[[cs:LVM]]<br />
[[de:LVM]]<br />
[[es:LVM]]<br />
[[fr:LVM]]<br />
[[it:LVM]]<br />
[[ja:LVM]]<br />
[[ru:LVM]]<br />
[[tr:LVM]]<br />
[[zh-CN:LVM]]<br />
{{Related articles start}}<br />
{{Related|Software RAID and LVM}}<br />
{{Related|System Encryption with LUKS}}<br />
{{Related|dm-crypt/Encrypting an Entire System#LVM on LUKS}}<br />
{{Related|dm-crypt/Encrypting an Entire System#LUKS on LVM}}<br />
{{Related articles end}}<br />
From [[Wikipedia:Logical Volume Manager (Linux)]]:<br />
:LVM is a [[Wikipedia:logical volume management|logical volume manager]] for the [[Wikipedia:Linux kernel|Linux kernel]]; it manages disk drives and similar mass-storage devices.<br />
<br />
=== LVM Building Blocks ===<br />
<br />
Logical Volume Management makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel to provide a system of partitions independent of the underlying disk's layout. With LVM you abstract your storage and have "virtual partitions", making it easier to extend and shrink partitions (subject to potential limitations of your file system) and add/remove partitions without worrying about whether you have enough contiguous space on a particular disk, getting caught up in fdisking a disk in use (and wondering whether the kernel is using the old or new partition table), or, having to move other partitions out of the way. This is strictly an ease-of-management issue: it does not provide any security. However, it sits nicely with the other two technologies we are using.<br />
<br />
The basic building blocks of LVM are:<br />
<br />
* '''Physical volume (PV)''': Partition on hard disk (or even hard disk itself or loopback file) on which you can have volume groups. It has a special header and is divided into physical extents. Think of physical volumes as big building blocks which can be used to build your hard drive.<br />
* '''Volume group (VG)''': Group of physical volumes that are used as storage volume (as one disk). They contain logical volumes. Think of volume groups as hard drives.<br />
* '''Logical volume (LV)''': A "virtual/logical partition" that resides in a volume group and is composed of physical extents. Think of logical volumes as normal partitions.<br />
* '''Physical extent (PE)''': A small part of a disk (usually 4MiB) that can be assigned to a logical Volume. Think of physical extents as parts of disks that can be allocated to any partition.<br />
<br />
Example:<br />
'''Physical disks'''<br />
<br />
Disk1 (/dev/sda):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 50GB (Physical volume) |Partition2 80GB (Physical volume) |<br />
|/dev/sda1 |/dev/sda2 |<br />
|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
Disk2 (/dev/sdb):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 120GB (Physical volume) |<br />
|/dev/sdb1 |<br />
| _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _|<br />
<br />
'''LVM logical volumes'''<br />
<br />
Volume Group1 (/dev/MyStorage/ = /dev/sda1 + /dev/sda2 + /dev/sdb1):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <br />
|Logical volume1 15GB |Logical volume2 35GB |Logical volume3 200GB |<br />
|/dev/MyStorage/rootvol|/dev/MyStorage/homevol |/dev/MyStorage/mediavol |<br />
|_ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
=== Advantages ===<br />
<br />
LVM gives you more flexibility than just using normal hard drive partitions:<br />
* Use any number of disks as one big disk.<br />
* Have logical volumes stretched over several disks.<br />
* Create small logical volumes and resize them "dynamically" as they get more filled.<br />
* Resize logical volumes regardless of their order on disk. It does not depend on the position of the LV within VG, there is no need to ensure surrounding available space.<br />
* Resize/create/delete logical and physical volumes online. File systems on them still need to be resized, but some support online resizing.<br />
* Online/live migration of LV being used by services to different disks without having to restart services.<br />
* Snapshots allow you to backup a frozen copy of the file system, while keeping service downtime to a minimum.<br />
<br />
These can be very helpful in a server situation, desktop less so, but you must decide if the features are worth the abstraction.<br />
<br />
=== Disadvantages ===<br />
<br />
* Linux exclusive (almost). There is no official support in most other OS (FreeBSD, Windows..).<br />
* Additional steps in setting up the system, more complicated.<br />
* If you use the [[Btrfs]] file system, its Subvolume feature will also give you the benefit of having a flexible layout. In that case, using the additional Abstraction layer of LVM may be unnecessary.<br />
<br />
== Installing Arch Linux on LVM ==<br />
<br />
You should create your LVM Volumes between the [[Partitioning]] and [[File Systems#Format a device|formatting]] steps of the Installation Procedure. Instead of directly formatting a partition to be your root file system, it will be created inside a logical volume (LV). <br />
<br />
Make sure the {{pkg|lvm2}} package is [[pacman|installed]].<br />
<br />
Quick overview: <br />
* Create partition(s) where your PV will reside. Set the partition type to 'Linux LVM', which is 8e if you use MBR, 8e00 for GPT.<br />
* Create your physical volumes (PV). If you have one disk it is best to just create one PV in one large partition. If you have multiple disks you can create partitions on each of them and create a PV on each partition.<br />
* Create your volume group (VG) and add all the PV to it.<br />
* Create logical volumes (LV) inside your VG.<br />
* Continue with “Format the partitions” step of [[Beginners' guide]].<br />
* When you reach the “Create initial ramdisk environment” step in the Beginners Guide, add the {{ic|lvm}} hook to {{ic|/etc/mkinitcpio.conf}} (see below for details).<br />
<br />
{{Warning|{{ic|/boot}} cannot reside in LVM when using [[GRUB Legacy]], which does not support LVM. [[GRUB]] users do not have this limitation. If you need to use GRUB Legacy, you must create a separate {{ic|/boot}} partition and format it directly. }}<br />
<br />
=== Create physical volumes ===<br />
<br />
Make sure you target the right partitions! To find the partitions with type 'Linux LVM':<br />
* MBR system: {{Ic|fdisk -l}}<br />
* GPT system: {{Ic|lsblk}} and then {{Ic|gdisk -l ''disk-device''}}<br />
<br />
Create a physical volume on them:<br />
# pvcreate ''disk-device''<br />
''disk-device'' may be e.g. {{ic|/dev/sda2}}.<br />
This command creates a header on each partition so it can be used for LVM.<br />
You can track created physical volumes with:<br />
# pvdisplay<br />
<br />
{{Note|If using a SSD, use {{ic|pvcreate --dataalignment 1m /dev/sda2}} (for erase block size < 1MiB), see e.g. [http://serverfault.com/questions/356534/ssd-erase-block-size-lvm-pv-on-raw-device-alignment here]}}<br />
<br />
=== Create volume group ===<br />
<br />
The next step is to create a volume group using the physical volumes. First create a volume group on one of the new partitions, then add to it all the other physical volumes you want in it:<br />
# vgcreate VolGroup00 /dev/sda2<br />
# vgextend VolGroup00 /dev/sdb1<br />
You can use any other name you like instead of VolGroup00 when creating the volume group. You can track how your volume group grows with:<br />
# vgdisplay<br />
<br />
{{Note|You can create more than one volume group if you need to, but then you will not have all your storage presented as one disk.}}<br />
<br />
=== Create logical volumes ===<br />
<br />
Now we need to create logical volumes in this volume group. You create a logical volume with the following command, giving the name of the new logical volume, its size, and the volume group it will live in:<br />
# lvcreate -L 10G VolGroup00 -n lvolhome<br />
This will create a logical volume that you can access later with {{ic|/dev/mapper/VolGroup00-lvolhome}} or {{ic|/dev/VolGroup00/lvolhome}}. As with volume groups, you can use any name you want for your logical volume when creating it.<br />
<br />
To create swap on a logical volume, an additional argument is needed:<br />
# lvcreate -C y -L 10G VolGroup00 -n lvolswap<br />
{{Ic|-C y}} creates a contiguous volume, meaning that your swap space is not spread over multiple disks or over non-contiguous physical extents.<br />
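<br />
Afterwards, initialize and activate the swap space as usual, using the volume name from the example above:<br />
# mkswap /dev/VolGroup00/lvolswap<br />
# swapon /dev/VolGroup00/lvolswap<br />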
<br />
If you want to fill all the free space left on a volume group, use the next command:<br />
# lvcreate -l +100%FREE VolGroup00 -n lvolmedia<br />
<br />
You can track created logical volumes with:<br />
# lvdisplay<br />
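<br />
For more compact output, {{ic|pvs}}, {{ic|vgs}} and {{ic|lvs}} summarize physical volumes, volume groups and logical volumes respectively:<br />
# pvs<br />
# vgs<br />
# lvs<br />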
<br />
{{Note|You may need to load the ''device-mapper'' kernel module ({{ic|modprobe dm-mod}}) for the above commands to succeed.}}<br />
<br />
{{Tip|You can start out with relatively small logical volumes and expand them later if needed. For simplicity, leave some free space in the volume group so there is room for expansion.}}<br />
<br />
=== Create file systems and mount logical volumes ===<br />
<br />
Your logical volumes should now be located in {{ic|/dev/mapper/}} and {{ic|/dev/''YourVolumeGroupName''}}. If you cannot find them, use the next commands to bring up the module for creating device nodes and to make volume groups available:<br />
# modprobe dm-mod<br />
# vgscan<br />
# vgchange -ay<br />
Now you can create file systems on the logical volumes and mount them as normal partitions (if you are installing Arch Linux, refer to [[Beginners' guide#Mount the partitions|mounting the partitions]] for additional details):<br />
# mkfs.ext4 /dev/mapper/VolGroup00-lvolhome<br />
# mount /dev/mapper/VolGroup00-lvolhome /home<br />
<br />
{{Warning|When choosing mountpoints, select your newly created logical volumes (use {{ic|/dev/mapper/VolGroup00-lvolhome}}). Do '''not''' select the actual partitions on which the logical volumes were created (do not use {{ic|/dev/sda2}}).}}<br />
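<br />
For example, a matching {{ic|/etc/fstab}} entry might look like this (a sketch; adjust the file system type and mount options to your setup):<br />
<br />
{{hc|/etc/fstab:|<nowiki>/dev/mapper/VolGroup00-lvolhome /home ext4 defaults 0 2</nowiki>}}<br />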
<br />
=== Add lvm hook to mkinitcpio.conf ===<br />
<br />
You will need to make sure the {{Ic|udev}} and {{Ic|lvm2}} [[mkinitcpio]] hooks are enabled.<br />
<br />
{{Ic|udev}} is there by default. Edit the file and insert {{Ic|lvm2}} between {{Ic|block}} and {{Ic|filesystems}} like so:<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>HOOKS="base udev ... block lvm2 filesystems"</nowiki>}}<br />
<br />
Afterwards, you can continue in normal installation instructions with the [[Mkinitcpio#Image_creation_and_activation|create an initial ramdisk]] step.<br />
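<br />
For example, assuming the stock {{ic|linux}} kernel and its default preset, the image can be rebuilt with:<br />
# mkinitcpio -p linux<br />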
<br />
== Configuration ==<br />
<br />
=== Advanced options ===<br />
<br />
If you need monitoring (required for snapshots), enable lvmetad by setting {{ic|1=use_lvmetad = 1}} in {{ic|/etc/lvm/lvm.conf}}. This is now the default.<br />
<br />
You can restrict the volumes that are activated automatically by setting the {{Ic|auto_activation_volume_list}} in {{Ic|/etc/lvm/lvm.conf}}. If in doubt, leave this option commented out.<br />
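<br />
For example, to auto-activate only one volume group plus a single volume from another (a sketch with hypothetical names):<br />
<br />
{{hc|/etc/lvm/lvm.conf:|<nowiki>auto_activation_volume_list = [ "VolGroup00", "vg00/lvolhome" ]</nowiki>}}<br />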
<br />
=== Grow physical volume ===<br />
<br />
After changing the size of the device underlying a physical volume (PV), e.g. after growing an mdadm RAID array, you need to grow the PV itself with the following command (this can be done online):<br />
# pvresize /dev/mdX<br />
<br />
=== Grow logical volume ===<br />
<br />
To grow a logical volume, you first grow the volume itself and then the file system on it, so the file system can use the newly created free space. Say we have a 15 GB logical volume with ext3 on it that we want to grow to 20 GB:<br />
# lvextend -L 20G VolGroup00/lvolhome<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
You may use {{Ic|lvresize}} instead of {{Ic|lvextend}}, e.g. {{ic|lvresize -L +5G VolGroup00/lvolhome}}.<br />
<br />
If you want to fill all the free space on a volume group, use the next commands:<br />
# lvextend -l +100%FREE VolGroup00/lvolhome<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
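<br />
Recent versions of lvm2 can also grow the file system in the same step with the {{ic|-r}}/{{ic|--resizefs}} option (for file systems supported by ''fsadm''), e.g.:<br />
# lvextend -r -l +100%FREE VolGroup00/lvolhome<br />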
<br />
{{Warning|Not all file systems support growing without loss of data and/or growing online.}}<br />
<br />
{{Note|If you do not resize your file system, you will still have a volume with the same size as before (volume will be bigger but partly unused).}}<br />
<br />
=== Shrink logical volume ===<br />
<br />
Because your file system is probably as big as the logical volume it resides on, you need to shrink the file system first and then shrink the logical volume. Depending on your file system, you may need to unmount it first. Let us say we have a logical volume of 15 GB with ext3 on it and we want to shrink it to 10 GB. We need to do the following steps: <br />
# resize2fs /dev/VolGroup00/lvolhome 9G<br />
# lvreduce -L 10G VolGroup00/lvolhome<br />
<br />
Here we shrank the file system more than needed, so that when we shrank the logical volume we did not accidentally cut off the end of the file system. Afterwards, we grow the file system again to fill all the free space left on the logical volume; running {{ic|resize2fs}} without a size argument grows the file system to fill its volume:<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
You may use {{Ic|lvresize}} instead of {{Ic|lvreduce}}; {{ic|lvresize -L -5G VolGroup00/lvolhome}} is equivalent to the {{ic|lvreduce}} command above.<br />
<br />
{{Warning|<br />
* Do not reduce the file system size to less than the amount of space occupied by data or you risk data loss.<br />
* Not all file systems support shrinking without loss of data and/or shrinking online.<br />
}}<br />
<br />
{{Note|It is better to reduce the file system to a smaller size than the logical volume, so that after resizing the logical volume, we do not accidentally cut off some data from the end of the file system.}}<br />
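<br />
The {{ic|-r}}/{{ic|--resizefs}} option mentioned in [[#Grow logical volume]] also works for shrinking; it combines both steps and avoids the size mismatch altogether, e.g.:<br />
# lvreduce -r -L 10G VolGroup00/lvolhome<br />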
<br />
=== Remove logical volume ===<br />
<br />
{{Warning|Before you remove a logical volume, make sure to move all data that you want to keep somewhere else; otherwise, it will be lost!}}<br />
<br />
First, find out the name of the logical volume you want to remove. You can get a list of all logical volumes installed on the system with:<br />
<br />
# lvs<br />
<br />
Next, look up the mountpoint for your chosen logical volume...:<br />
<br />
$ df -h<br />
<br />
... and unmount it:<br />
<br />
# umount /your_mountpoint<br />
<br />
Finally, remove the logical volume:<br />
<br />
# lvremove /dev/yourVG/yourLV<br />
<br />
Confirm by typing {{ic|y}} and you are done.<br />
<br />
Do not forget to update {{ic|/etc/fstab}}!<br />
<br />
You can verify the removal of your logical volume by typing {{ic|lvs}} as root again (see first step of this section).<br />
<br />
=== Add physical volume to a volume group ===<br />
<br />
You first create a new physical volume on the block device you wish to use, then extend your volume group:<br />
<br />
{{bc|1=<br />
# pvcreate /dev/sdb1<br />
# vgextend VolGroup00 /dev/sdb1<br />
}}<br />
<br />
This of course will increase the total number of physical extents on your volume group, which can be allocated by logical volumes as you see fit.<br />
<br />
{{Note|It is considered good form to have a [[Partitioning|partition table]] on your storage medium below LVM. Use the appropriate type code: {{ic|8e}} for MBR, and {{ic|8e00}} for GPT partitions.}}<br />
<br />
=== Remove partition from a volume group ===<br />
<br />
All of the data on that partition needs to be moved to another partition. Fortunately, LVM makes this easy:<br />
# pvmove /dev/sdb1<br />
If you want to have the data on a specific physical volume, specify that as the second argument to {{Ic|pvmove}}:<br />
# pvmove /dev/sdb1 /dev/sdf1<br />
Then the physical volume needs to be removed from the volume group:<br />
# vgreduce myVg /dev/sdb1<br />
Or remove all empty physical volumes:<br />
# vgreduce --all vg0<br />
<br />
Lastly, if you want to use the partition for something else and avoid LVM thinking that the partition is a physical volume:<br />
# pvremove /dev/sdb1<br />
<br />
=== Deactivate volume group ===<br />
<br />
Just invoke <br />
# vgchange -a n my_volume_group<br />
<br />
This will deactivate the volume group and allow you to unmount the container it is stored in.<br />
<br />
=== Snapshots ===<br />
<br />
==== Introduction ====<br />
<br />
LVM allows you to take snapshots of your system in a much more efficient way than a traditional backup. It does this by using a copy-on-write (COW) policy at the block level. The initial snapshot contains no data of its own, only references to the blocks of the origin volume; as long as the data remains unchanged, the snapshot takes up almost no space. Whenever data is modified, the original blocks are first copied into the snapshot volume, so the old version stays referenced by the snapshot while your active system sees the new one. Thus, you can snapshot a system with 35 GB of data using just 2 GB of free space, as long as less than 2 GB is modified (on both the original and the snapshot).<br />
<br />
==== Configuration ====<br />
<br />
You create snapshot logical volumes just like normal ones:<br />
<br />
# lvcreate --size 100M --snapshot --name snap01 /dev/mapper/vg0-pv<br />
With this volume, you may modify less than 100 MiB of data before the snapshot volume fills up.<br />
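<br />
You can monitor how full a snapshot is with {{ic|lvs}}; the {{ic|Data%}} column shows how much of the snapshot volume is in use:<br />
# lvs<br />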
<br />
Reverting the modified 'pv' logical volume to the state when the 'snap01' snapshot was taken can be done with<br />
<br />
{{ic|# lvconvert --merge /dev/vg0/snap01}}<br />
<br />
If the origin logical volume is active, merging will occur on the next reboot (merging can even be done from a live CD).<br />
<br />
The snapshot will no longer exist after merging.<br />
<br />
Multiple snapshots can also be taken, and each can be merged with the origin logical volume at will.<br />
<br />
The snapshot can be mounted and backed up with '''dd''' or '''tar'''. Note that '''dd''' images the whole snapshot volume, while '''tar''' archives only the files residing on it.<br />
To restore, create a snapshot, mount it, write or extract the backup to it, and then merge it with the origin.<br />
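<br />
As a minimal sketch of such a backup (the mount point and backup path here are hypothetical):<br />
# mkdir -p /mnt/snap<br />
# mount /dev/vg0/snap01 /mnt/snap<br />
# tar -czf /backup/snap01.tar.gz -C /mnt/snap .<br />
# umount /mnt/snap<br />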
<br />
It is important to have the ''dm_snapshot'' module listed in the {{ic|MODULES}} variable of {{ic|/etc/mkinitcpio.conf}}; otherwise, the system will not boot. If you add it on an already installed system, make sure to rebuild the image with<br />
# mkinitcpio -g /boot/initramfs-linux.img<br />
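<br />
For reference, the relevant line might look like this (keep whatever modules you already have listed):<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>MODULES="dm_snapshot ..."</nowiki>}}<br />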
<br />
Todo: scripts to automate snapshots of root before updates, to rollback... updating {{ic|menu.lst}} to boot snapshots (separate article?)<br />
<br />
Snapshots are primarily used to provide a frozen copy of a file system for making backups; a backup that takes two hours provides a more consistent image of the file system than directly backing up a live partition.<br />
<br />
See [[Create root filesystem snapshots with LVM]] for automating the creation of clean root file system snapshots during system startup for backup and rollback.<br />
<br />
See [[Dm-crypt/Encrypting an Entire System#LVM on LUKS]] and [[Dm-crypt/Encrypting an Entire System#LUKS on LVM]] for using LVM together with disk encryption.<br />
<br />
If you have LVM volumes that are not activated via the [[Mkinitcpio|initramfs]], [[systemd#Using units|enable]] the '''lvm-monitoring''' service, which is provided by the {{pkg|lvm2}} package.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Changes that could be required due to changes in the Arch Linux defaults ===<br />
<br />
{{ic|1=use_lvmetad = 1}} must be set in {{ic|/etc/lvm/lvm.conf}}. This is now the default; if you have a {{ic|lvm.conf.pacnew}} file, you must merge this change.<br />
<br />
=== LVM commands do not work ===<br />
<br />
* Load the proper module:<br />
# modprobe dm_mod<br />
<br />
The {{ic|dm_mod}} module should be loaded automatically. In case it is not, you can try:<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>MODULES="dm_mod ..."</nowiki>}}<br />
<br />
You will need to [[Mkinitcpio#Image_creation_and_activation|rebuild]] the initramfs to commit any changes you made.<br />
<br />
* Try preceding commands with ''lvm'' like this:<br />
# lvm pvdisplay<br />
<br />
=== Logical Volumes do not show up ===<br />
<br />
If you are trying to mount existing logical volumes, but they do not show up in {{ic|lvscan}}, you can use the following commands to activate them:<br />
<br />
# vgscan<br />
# vgchange -ay<br />
<br />
=== LVM on removable media ===<br />
<br />
Symptoms:<br />
# vgscan<br />
Reading all physical volumes. This may take a while...<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836585984: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836643328: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 0: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 4096: Input/output error<br />
Found volume group "backupdrive1" using metadata type lvm2<br />
Found volume group "networkdrive" using metadata type lvm2<br />
<br />
Cause:<br />
:Removing an external LVM drive without deactivating the volume group(s) first. Before you disconnect, make sure to:<br />
# vgchange -an ''volume group name''<br />
<br />
Fix: assuming you already tried to activate the volume group with {{ic|vgchange -ay ''vg''}} and are receiving Input/output errors:<br />
# vgchange -an ''volume group name''<br />
Unplug the external drive and wait a few minutes:<br />
# vgscan<br />
# vgchange -ay ''volume group name''<br />
<br />
=== Kernel options ===<br />
<br />
In the kernel options, you may need {{ic|dolvm}}. {{ic|<nowiki>root=</nowiki>}} should be set to the logical volume, e.g. {{ic|/dev/mapper/''vg-name''-''lv-name''}}.<br />
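<br />
For example, with the volume group from the example at the top of this page, the kernel command line would contain:<br />
root=/dev/mapper/MyStorage-rootvol<br />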
<br />
== See also ==<br />
<br />
* [http://sourceware.org/lvm2/ LVM2 Resource Page] on SourceWare.org<br />
* [http://tldp.org/HOWTO/LVM-HOWTO/ LVM HOWTO] article at The Linux Documentation project<br />
* [http://www.gentoo.org/doc/en/lvm2.xml Gentoo LVM2 installation] guide at Gentoo documentation<br />
* [http://wiki.gentoo.org/wiki/LVM LVM] article at Gentoo wiki<br />
* [http://www.joshbryan.com/blog/2008/01/02/lvm2-mirrors-vs-md-raid-1/ LVM2 Mirrors vs. MD Raid 1] post by Josh Bryan<br />
* [http://www.tutonics.com/2012/11/ubuntu-lvm-guide-part-1.html Ubuntu LVM Guide Part 1] and [http://www.tutonics.com/2012/12/lvm-guide-part-2-snapshots.html Part 2 (details snapshots)]</div>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
[[Category:File systems]]<br />
[[cs:LVM]]<br />
[[de:LVM]]<br />
[[es:LVM]]<br />
[[fr:LVM]]<br />
[[it:LVM]]<br />
[[ja:LVM]]<br />
[[ru:LVM]]<br />
[[tr:LVM]]<br />
[[zh-CN:LVM]]<br />
{{Related articles start}}<br />
{{Related|Software RAID and LVM}}<br />
{{Related|System Encryption with LUKS}}<br />
{{Related|dm-crypt/Encrypting an Entire System#LVM on LUKS}}<br />
{{Related|dm-crypt/Encrypting an Entire System#LUKS on LVM}}<br />
{{Related articles end}}<br />
From [[Wikipedia:Logical Volume Manager (Linux)]]:<br />
:LVM is a [[Wikipedia:logical volume management|logical volume manager]] for the [[Wikipedia:Linux kernel|Linux kernel]]; it manages disk drives and similar mass-storage devices.<br />
<br />
=== LVM Building Blocks ===<br />
<br />
Logical Volume Management makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel to provide a system of partitions independent of the underlying disk's layout. With LVM you abstract your storage and have "virtual partitions", making it easier to extend and shrink partitions (subject to potential limitations of your file system) and add/remove partitions without worrying about whether you have enough contiguous space on a particular disk, getting caught up in fdisking a disk in use (and wondering whether the kernel is using the old or new partition table), or, having to move other partitions out of the way. This is strictly an ease-of-management issue: it does not provide any security. However, it sits nicely with the other two technologies we are using.<br />
<br />
The basic building blocks of LVM are:<br />
<br />
* '''Physical volume (PV)''': Partition on hard disk (or even hard disk itself or loopback file) on which you can have volume groups. It has a special header and is divided into physical extents. Think of physical volumes as big building blocks which can be used to build your hard drive.<br />
* '''Volume group (VG)''': Group of physical volumes that are used as storage volume (as one disk). They contain logical volumes. Think of volume groups as hard drives.<br />
* '''Logical volume (LV)''': A "virtual/logical partition" that resides in a volume group and is composed of physical extents. Think of logical volumes as normal partitions.<br />
* '''Physical extent (PE)''': A small part of a disk (usually 4MiB) that can be assigned to a logical Volume. Think of physical extents as parts of disks that can be allocated to any partition.<br />
<br />
Example:<br />
'''Physical disks'''<br />
<br />
Disk1 (/dev/sda):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 50GB (Physical volume) |Partition2 80GB (Physical volume) |<br />
|/dev/sda1 |/dev/sda2 |<br />
|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
Disk2 (/dev/sdb):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 120GB (Physical volume) |<br />
|/dev/sdb1 |<br />
| _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _|<br />
<br />
'''LVM logical volumes'''<br />
<br />
Volume Group1 (/dev/MyStorage/ = /dev/sda1 + /dev/sda2 + /dev/sdb1):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <br />
|Logical volume1 15GB |Logical volume2 35GB |Logical volume3 200GB |<br />
|/dev/MyStorage/rootvol|/dev/MyStorage/homevol |/dev/MyStorage/mediavol |<br />
|_ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
=== Advantages ===<br />
<br />
LVM gives you more flexibility than just using normal hard drive partitions:<br />
* Use any number of disks as one big disk.<br />
* Have logical volumes stretched over several disks.<br />
* Create small logical volumes and resize them "dynamically" as they get more filled.<br />
* Resize logical volumes regardless of their order on disk. It does not depend on the position of the LV within VG, there is no need to ensure surrounding available space.<br />
* Resize/create/delete logical and physical volumes online. File systems on them still need to be resized, but some support online resizing.<br />
* Online/live migration of LV being used by services to different disks without having to restart services.<br />
* Snapshots allow you to backup a frozen copy of the file system, while keeping service downtime to a minimum.<br />
<br />
These can be very helpful in a server situation, desktop less so, but you must decide if the features are worth the abstraction.<br />
<br />
=== Disadvantages ===<br />
<br />
* Linux exclusive (almost). There is no official support in most other OS (FreeBSD, Windows..).<br />
* Additional steps in setting up the system, more complicated.<br />
* If you use the [[Btrfs]] file system, its Subvolume feature will also give you the benefit of having a flexible layout. In that case, using the additional Abstraction layer of LVM may be unnecessary.<br />
<br />
== Installing Arch Linux on LVM ==<br />
<br />
You should create your LVM Volumes between the [[Partitioning]] and [[File Systems#Format a device|formatting]] steps of the Installation Procedure. Instead of directly formatting a partition to be your root file system, it will be created inside a logical volume (LV). <br />
<br />
Make sure the {{pkg|lvm2}} package is [[pacman|installed]].<br />
<br />
Quick overview: <br />
* Create partition(s) where your PV will reside. Set the partition type to 'Linux LVM', which is 8e if you use MBR, 8e00 for GPT.<br />
* Create your physical volumes (PV). If you have one disk it is best to just create one PV in one large partition. If you have multiple disks you can create partitions on each of them and create a PV on each partition.<br />
* Create your volume group (VG) and add all the PV to it.<br />
* Create logical volumes (LV) inside your VG.<br />
* Continue with “Format the partitions” step of [[Beginners' guide]].<br />
* When you reach the “Create initial ramdisk environment” step in the Beginners Guide, add the {{ic|lvm}} hook to {{ic|/etc/mkinitcpio.conf}} (see below for details).<br />
<br />
{{Warning|{{ic|/boot}} cannot reside in LVM when using [[GRUB Legacy]], which does not support LVM. [[GRUB]] users do not have this limitation. If you need to use GRUB Legacy, you must create a separate {{ic|/boot}} partition and format it directly. }}<br />
<br />
=== Create physical volumes ===<br />
<br />
Make sure you target the right partitions! To find the partitions with type 'Linux LVM':<br />
* MBR system: {{Ic|fdisk -l}}<br />
* GPT system: {{Ic|lsblk}} and then {{Ic|gdisk -l ''disk-device''}}<br />
<br />
Create a physical volume on them:<br />
# pvcreate ''disk-device''<br />
''disk-device'' may be e.g. {{ic|/dev/sda2}}.<br />
This command creates a header on each partition so it can be used for LVM.<br />
You can track created physical volumes with:<br />
# pvdisplay<br />
<br />
{{Note|If using a SSD, use {{ic|pvcreate --dataalignment 1m /dev/sda2}} (for erase block size < 1MiB), see e.g. [http://serverfault.com/questions/356534/ssd-erase-block-size-lvm-pv-on-raw-device-alignment here]}}<br />
<br />
=== Create volume group ===<br />
<br />
Next step is to create a volume group on this physical volume. First you need to create a volume group on one of the new partitions and then add to it all other physical volumes you want to have in it:<br />
# vgcreate VolGroup00 /dev/sda2<br />
# vgextend VolGroup00 /dev/sdb1<br />
Also you can use any other name you like instead of VolGroup00 for a volume group when creating it. You can track how your volume group grows with:<br />
# vgdisplay<br />
<br />
{{Note|You can create more than one volume group if you need to, but then you will not have all your storage presented as one disk.}}<br />
<br />
=== Create logical volumes ===<br />
<br />
Now we need to create logical volumes on this volume group. You create a logical volume with the next command by giving the name of a new logical volume, its size, and the volume group it will live on:<br />
# lvcreate -L 10G VolGroup00 -n lvolhome<br />
This will create a logical volume that you can access later with {{ic|/dev/mapper/Volgroup00-lvolhome}} or {{ic|/dev/VolGroup00/lvolhome}}. Same as with the volume groups, you can use any name you want for your logical volume when creating it.<br />
<br />
To create swap on a logical volume, an additional argument is needed:<br />
# lvcreate -C y -L 10G VolGroup00 -n lvolswap<br />
The {{Ic|-C y}} is used to create a contiguous partition, which means that your swap space does not get partitioned over one or more disks nor over non-contiguous physical extents.<br />
<br />
If you want to fill all the free space left on a volume group, use the next command:<br />
# lvcreate -l +100%FREE VolGroup00 -n lvolmedia<br />
<br />
You can track created logical volumes with:<br />
# lvdisplay<br />
<br />
{{Note|You may need to load the ''device-mapper'' kernel module ('''modprobe dm-mod''') for the above commands to succeed:}}<br />
<br />
{{Tip|You can start out with relatively small logical volumes and expand them later if needed. For simplicity, leave some free space in the volume group so there is room for expansion.}}<br />
<br />
=== Create file systems and mount logical volumes ===<br />
<br />
Your logical volumes should now be located in {{ic|/dev/mapper/}} and {{ic|/dev/''YourVolumeGroupName''}}. If you cannot find them, use the next commands to bring up the module for creating device nodes and to make volume groups available:<br />
# modprobe dm-mod<br />
# vgscan<br />
# vgchange -ay<br />
Now you can create file systems on logical volumes and mount them as normal partitions (if you are installing Arch linux, refer to [[Beginners' guide#Mount the partitions|mounting the partitions]] for additional details):<br />
# mkfs.ext4 /dev/mapper/VolGroup00-lvolhome<br />
# mount /dev/mapper/VolGroup00-lvolhome /home<br />
<br />
{{Warning|When choosing mountpoints, just select your newly created logical volumes (use: {{ic|/dev/mapper/Volgroup00-lvolhome}}). Do '''not''' select the actual partitions on which logical volumes were created (do not use: {{ic|/dev/sda2}}).}}<br />
<br />
=== Add lvm hook to mkinitcpio.conf ===<br />
<br />
You will need to make sure the {{Ic|udev}} and {{Ic|lvm2}} [[mkinitcpio]] hooks are enabled.<br />
<br />
{{Ic|udev}} is there by default. Edit the file and insert {{Ic|lvm2}} between {{Ic|block}} and {{Ic|filesystem}} like so:<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>HOOKS="base udev ... block lvm2 filesystems"</nowiki>}}<br />
<br />
Afterwards, you can continue in normal installation instructions with the [[Mkinitcpio#Image_creation_and_activation|create an initial ramdisk]] step.<br />
<br />
== Configuration ==<br />
<br />
=== Advanced options ===<br />
<br />
If you need monitoring (needed for snapshots) you can enable lvmetad. <br />
For this set {{ic|1=use_lvmetad = 1}} in {{ic|/etc/lvm/lvm.conf}}.<br />
This is the default by now. <br />
<br />
You can restrict the volumes that are activated automatically by setting the {{Ic|auto_activation_volume_list}} in {{Ic|/etc/lvm/lvm.conf}}. If in doubt, leave this option commented out.<br />
<br />
=== Grow physical volume ===<br />
<br />
After changing the size of a physical volume (pv), e.g: growing a mdadm raid array, you need to grow the pv using the following command, <br />
<br />
Note: (can be done online)<br />
# pvresize /dev/mdX<br />
<br />
=== Grow logical volume ===<br />
<br />
To grow a logical volume you first need to grow the logical volume and then the file system to use the newly created free space. Let us say we have a logical volume of 15 GB with ext3 on it, and we want to grow it to 20 GB. We need to do the following steps: <br />
# lvextend -L 20G VolGroup00/lvolhome (or lvresize -L +5G VolGroup00/lvolhome)<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
You may use {{Ic|lvresize}} instead of {{Ic|lvextend}}.<br />
<br />
If you want to fill all the free space on a volume group, use the next commands:<br />
# lvextend -l +100%FREE VolGroup00/lvolhome<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
<br />
{{Warning|Not all file systems support growing without loss of data and/or growing online.}}<br />
<br />
{{Note|If you do not resize your file system, you will still have a volume with the same size as before (volume will be bigger but partly unused).}}<br />
<br />
=== Shrink logical volume ===<br />
<br />
Because your file system is probably as big as the logical volume it resides on, you need to shrink the file system first and then shrink the logical volume. Depending on your file system, you may need to unmount it first. Let us say we have a logical volume of 15 GB with ext3 on it and we want to shrink it to 10 GB. We need to do the following steps: <br />
# resize2fs /dev/VolGroup00/lvolhome 9G<br />
# lvreduce -L 10G VolGroup00/lvolhome<br />
<br />
Here we shrunk the file system more than needed so that when we shrunk the logical volume we did not accidentally cut off the end of the file system. After that, we normally grow the file system to fill all free space left on logical volume. You may use {{Ic|lvresize}} instead of {{Ic|lvreduce}}.<br />
# lvresize -L -5G VolGroup00/lvolhome<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
<br />
{{Warning|<br />
* Do not reduce the file system size to less than the amount of space occupied by data or you risk data loss.<br />
* Not all file systems support shrinking without loss of data and/or shrinking online.<br />
}}<br />
<br />
{{Note|It is better to reduce the file system to a smaller size than the logical volume, so that after resizing the logical volume, we do not accidentally cut off some data from the end of the file system.}}<br />
<br />
=== Remove logical volume ===<br />
<br />
{{Warning|Before you remove a logical volume, make sure to move all data that you want to keep somewhere else; otherwise, it will be lost!}}<br />
<br />
First, find out the name of the logical volume you want to remove. You can get a list of all logical volumes installed on the system with:<br />
<br />
# lvs<br />
<br />
Next, look up the mountpoint for your chosen logical volume...:<br />
<br />
$ df -h<br />
<br />
... and unmount it:<br />
<br />
# umount /your_mountpoint<br />
<br />
Finally, remove the logical volume:<br />
<br />
# lvremove /dev/yourVG/yourLV<br />
<br />
Confirm by typing {{ic|y}} and you are done.<br />
<br />
Do not forget, to update {{ic|/etc/fstab}}!<br />
<br />
You can verify the removal of your logical volume by typing {{ic|lvs}} as root again (see first step of this section).<br />
<br />
=== Add physical volume to a volume group ===<br />
<br />
You first create a new physical volume on the block device you wish to use, then extend your volume group<br />
<br />
{{bc|1=<br />
# pvcreate /dev/sdb1<br />
# vgextend VolGroup00 /dev/sdb1<br />
}}<br />
<br />
This of course will increase the total number of physical extents on your volume group, which can be allocated by logical volumes as you see fit.<br />
<br />
{{Note|It is considered good form to have a [[Partitioning|partition table]] on your storage medium below LVM. Use the appropriate type code: {{ic|8e}} for MBR, and {{ic|8e00}} for GPT partitions.}}<br />
<br />
=== Remove partition from a volume group ===<br />
<br />
All of the data on that partition needs to be moved to another partition. Fortunately, LVM makes this easy:<br />
# pvmove /dev/sdb1<br />
If you want to have the data on a specific physical volume, specify that as the second argument to {{Ic|pvmove}}:<br />
# pvmove /dev/sdb1 /dev/sdf1<br />
Then the physical volume needs to be removed from the volume group:<br />
# vgreduce myVg /dev/sdb1<br />
Or remove all empty physical volumes:<br />
# vgreduce --all vg0<br />
<br />
And lastly, if you want to use the partition for something else, and want to avoid LVM thinking that the partition is a physical volume:<br />
# pvremove /dev/sdb1<br />
<br />
<br />
=== Deactivate volume group ===<br />
<br />
Just invoke <br />
# vgchange -a n my_volume_group<br />
<br />
This will deactivate the volume group and allow you to unmount the container it is stored in.<br />
<br />
=== Snapshots ===<br />
<br />
==== Introduction ====<br />
<br />
LVM allows you to take a snapshot of your system in a much more efficient way than a traditional backup. It does this efficiently by using a COW (copy-on-write) policy. The initial snapshot you take simply contains hard-links to the inodes of your actual data. So long as your data remains unchanged, the snapshot merely contains its inode pointers and not the data itself. Whenever you modify a file or directory that the snapshot points to, LVM automatically clones the data, the old copy referenced by the snapshot, and the new copy referenced by your active system. Thus, you can snapshot a system with 35GB of data using just 2GB of free space so long as you modify less than 2GB (on both the original and snapshot).<br />
<br />
==== Configuration ====<br />
<br />
You create snapshot logical volumes just like normal ones.<br />
<br />
# lvcreate --size 100M --snapshot --name snap01 /dev/mapper/vg0-pv<br />
With that volume, you may modify less than 100M of data, before the snapshot volume fills up.<br />
<br />
Reverting the modified 'pv' logical volume to the state when the 'snap01' snapshot was taken can be done with<br />
<br />
{{ic|# lvconvert --merge /dev/vg0/snap01}}<br />
<br />
In case the origin logical volume is active, merging will occur on the next reboot.(Merging can be done even from a LiveCD)<br />
<br />
The snapshot will no longer exist after merging.<br />
<br />
Also multiple snapshots can be taken and each one can be merged with the origin logical volume at will.<br />
<br />
The snapshot can be mounted and backed up with '''dd''' or '''tar'''. The size of the backup file done with '''dd''' will be the size of the files residing on the snapshot volume. <br />
To restore just create a snapshot, mount it, and write or extract the backup to it. And then merge it with the origin.<br />
<br />
It is important to have the ''dm_snapshot'' module listed in the MODULES variable of {{ic|/etc/mkinitcpio.conf}}, otherwise the system will not boot. If you do this on an already installed system, make sure to rebuild the image with<br />
# mkinitcpio -g /boot/initramfs-linux.img<br />
<br />
Todo: scripts to automate snapshots of root before updates, to rollback... updating {{ic|menu.lst}} to boot snapshots (separate article?)<br />
<br />
snapshots are primarily used to provide a frozen copy of a file system to make backups; a backup taking two hours provides a more consistent image of the file system than directly backing up the partition.<br />
<br />
See [[Create root filesystem snapshots with LVM]] for automating the creation of clean root file system snapshots during system startup for backup and rollback.<br />
<br />
[[Dm-crypt/Encrypting an Entire System#LVM on LUKS]] and [[Dm-crypt/Encrypting an Entire System#LUKS on LVM]].<br />
<br />
If you have LVM volumes not activated via the [[Mkinitcpio|initramfs]], [[#Using units|enable]] the '''lvm-monitoring''' service, which is provided by the {{pkg|lvm2}} package.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Changes that could be required due to changes in the Arch-Linux defaults ===<br />
<br />
The {{ic|1=use_lvmetad = 1}} must be set in {{ic|/etc/lvm/lvm.conf}}. This is the default now - if you have a {{ic|lvm.conf.pacnew}} file, you must merge this change.<br />
<br />
=== LVM commands do not work ===<br />
<br />
* Load proper module:<br />
# modprobe dm_mod<br />
<br />
The {{ic|dm_mod}} module should be automatically loaded. In case it does not, you can try:<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>MODULES="dm_mod ..."</nowiki>}}<br />
<br />
You will need to [[Mkinitcpio#Image_creation_and_activation|rebuild]] the initramfs to commit any changes you made.<br />
<br />
* Try preceding commands with ''lvm'' like this:<br />
# lvm pvdisplay<br />
<br />
=== Logical Volumes do not show up ===<br />
<br />
If you are trying to mount existing logical volumes, but they do not show up in {{ic|lvscan}}, you can use the following commands to activate them:<br />
<br />
# vgscan<br />
# vgchange -ay<br />
<br />
=== LVM on removable media ===<br />
<br />
Symptoms:<br />
# vgscan<br />
Reading all physical volumes. This may take a while...<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836585984: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836643328: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 0: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 4096: Input/output error<br />
Found volume group "backupdrive1" using metadata type lvm2<br />
Found volume group "networkdrive" using metadata type lvm2<br />
<br />
Cause:<br />
:Removing an external LVM drive without deactivating the volume group(s) first. Before you disconnect, make sure to:<br />
# vgchange -an ''volume group name''<br />
<br />
Fix: (assuming you already tried to activate the volume group with {{ic|# vgchange -ay ''vg''}}, and are receiving the Input/output errors:<br />
# vgchange -an ''volume group name''<br />
Unplug the external drive and wait a few minutes:<br />
# vgscan<br />
# vgchange -ay ''volume group name''<br />
<br />
=== Kernel options ===<br />
<br />
In kernel options, you may need {{ic|dolvm}}. {{ic|<nowiki>root=</nowiki>}} should be set to the logical volume, e.g {{ic|/dev/mapper/''vg-name''-''lv-name''}}.<br />
<br />
== See also ==<br />
<br />
* [http://sourceware.org/lvm2/ LVM2 Resource Page] on SourceWare.org<br />
* [http://tldp.org/HOWTO/LVM-HOWTO/ LVM HOWTO] article at The Linux Documentation project<br />
* [http://www.gentoo.org/doc/en/lvm2.xml Gentoo LVM2 installation] guide at Gentoo documentation<br />
* [http://wiki.gentoo.org/wiki/LVM LVM] article at Gentoo wiki<br />
* [http://www.joshbryan.com/blog/2008/01/02/lvm2-mirrors-vs-md-raid-1/ LVM2 Mirrors vs. MD Raid 1] post by Josh Bryan<br />
* [http://www.tutonics.com/2012/11/ubuntu-lvm-guide-part-1.html Ubuntu LVM Guide Part 1][http://www.tutonics.com/2012/12/lvm-guide-part-2-snapshots.html Part 2 detals snapshots]</div>Ploomshttps://wiki.archlinux.org/index.php?title=LVM&diff=307479LVM2014-03-28T10:46:51Z<p>Plooms: /* Grow physical volume */</p>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
[[Category:File systems]]<br />
[[cs:LVM]]<br />
[[de:LVM]]<br />
[[es:LVM]]<br />
[[fr:LVM]]<br />
[[it:LVM]]<br />
[[ja:LVM]]<br />
[[ru:LVM]]<br />
[[tr:LVM]]<br />
[[zh-CN:LVM]]<br />
{{Related articles start}}<br />
{{Related|Software RAID and LVM}}<br />
{{Related|System Encryption with LUKS}}<br />
{{Related|dm-crypt/Encrypting an Entire System#LVM on LUKS}}<br />
{{Related|dm-crypt/Encrypting an Entire System#LUKS on LVM}}<br />
{{Related articles end}}<br />
From [[Wikipedia:Logical Volume Manager (Linux)]]:<br />
:LVM is a [[Wikipedia:logical volume management|logical volume manager]] for the [[Wikipedia:Linux kernel|Linux kernel]]; it manages disk drives and similar mass-storage devices.<br />
<br />
=== LVM Building Blocks ===<br />
<br />
Logical Volume Management makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel to provide a system of partitions independent of the underlying disk's layout. With LVM you abstract your storage and have "virtual partitions", making it easier to extend and shrink partitions (subject to potential limitations of your file system) and add/remove partitions without worrying about whether you have enough contiguous space on a particular disk, getting caught up in fdisking a disk in use (and wondering whether the kernel is using the old or new partition table), or, having to move other partitions out of the way. This is strictly an ease-of-management issue: it does not provide any security. However, it sits nicely with the other two technologies we are using.<br />
<br />
The basic building blocks of LVM are:<br />
<br />
* '''Physical volume (PV)''': Partition on hard disk (or even hard disk itself or loopback file) on which you can have volume groups. It has a special header and is divided into physical extents. Think of physical volumes as big building blocks which can be used to build your hard drive.<br />
* '''Volume group (VG)''': Group of physical volumes that are used as storage volume (as one disk). They contain logical volumes. Think of volume groups as hard drives.<br />
* '''Logical volume (LV)''': A "virtual/logical partition" that resides in a volume group and is composed of physical extents. Think of logical volumes as normal partitions.<br />
* '''Physical extent (PE)''': A small part of a disk (usually 4MiB) that can be assigned to a logical Volume. Think of physical extents as parts of disks that can be allocated to any partition.<br />
<br />
Example:<br />
'''Physical disks'''<br />
<br />
Disk1 (/dev/sda):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 50GB (Physical volume) |Partition2 80GB (Physical volume) |<br />
|/dev/sda1 |/dev/sda2 |<br />
|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
Disk2 (/dev/sdb):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 120GB (Physical volume) |<br />
|/dev/sdb1 |<br />
| _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _|<br />
<br />
'''LVM logical volumes'''<br />
<br />
Volume Group1 (/dev/MyStorage/ = /dev/sda1 + /dev/sda2 + /dev/sdb1):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <br />
|Logical volume1 15GB |Logical volume2 35GB |Logical volume3 200GB |<br />
|/dev/MyStorage/rootvol|/dev/MyStorage/homevol |/dev/MyStorage/mediavol |<br />
|_ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
=== Advantages ===<br />
<br />
LVM gives you more flexibility than just using normal hard drive partitions:<br />
* Use any number of disks as one big disk.<br />
* Have logical volumes stretched over several disks.<br />
* Create small logical volumes and resize them "dynamically" as they get more filled.<br />
* Resize logical volumes regardless of their order on disk. It does not depend on the position of the LV within VG, there is no need to ensure surrounding available space.<br />
* Resize/create/delete logical and physical volumes online. File systems on them still need to be resized, but some support online resizing.<br />
* Online/live migration of LV being used by services to different disks without having to restart services.<br />
* Snapshots allow you to backup a frozen copy of the file system, while keeping service downtime to a minimum.<br />
<br />
These can be very helpful in a server situation, desktop less so, but you must decide if the features are worth the abstraction.<br />
<br />
=== Disadvantages ===<br />
<br />
* Linux exclusive (almost). There is no official support in most other OS (FreeBSD, Windows..).<br />
* Additional steps in setting up the system, more complicated.<br />
* If you use the [[Btrfs]] file system, its Subvolume feature will also give you the benefit of having a flexible layout. In that case, using the additional Abstraction layer of LVM may be unnecessary.<br />
<br />
== Installing Arch Linux on LVM ==<br />
<br />
You should create your LVM Volumes between the [[Partitioning]] and [[File Systems#Format a device|formatting]] steps of the Installation Procedure. Instead of directly formatting a partition to be your root file system, it will be created inside a logical volume (LV). <br />
<br />
Make sure the {{pkg|lvm2}} package is [[pacman|installed]].<br />
<br />
Quick overview: <br />
* Create partition(s) where your PV will reside. Set the partition type to 'Linux LVM', which is 8e if you use MBR, 8e00 for GPT.<br />
* Create your physical volumes (PV). If you have one disk it is best to just create one PV in one large partition. If you have multiple disks you can create partitions on each of them and create a PV on each partition.<br />
* Create your volume group (VG) and add all the PV to it.<br />
* Create logical volumes (LV) inside your VG.<br />
* Continue with “Format the partitions” step of [[Beginners' guide]].<br />
* When you reach the “Create initial ramdisk environment” step in the Beginners Guide, add the {{ic|lvm}} hook to {{ic|/etc/mkinitcpio.conf}} (see below for details).<br />
<br />
{{Warning|{{ic|/boot}} cannot reside in LVM when using [[GRUB Legacy]], which does not support LVM. [[GRUB]] users do not have this limitation. If you need to use GRUB Legacy, you must create a separate {{ic|/boot}} partition and format it directly. }}<br />
<br />
=== Create physical volumes ===<br />
<br />
Make sure you target the right partitions! To find the partitions with type 'Linux LVM':<br />
* MBR system: {{Ic|fdisk -l}}<br />
* GPT system: {{Ic|lsblk}} and then {{Ic|gdisk -l ''disk-device''}}<br />
<br />
Create a physical volume on them:<br />
# pvcreate ''disk-device''<br />
''disk-device'' may be e.g. {{ic|/dev/sda2}}.<br />
This command creates a header on each partition so it can be used for LVM.<br />
You can track created physical volumes with:<br />
# pvdisplay<br />
<br />
{{Note|If using a SSD, use {{ic|pvcreate --dataalignment 1m /dev/sda2}} (for erase block size < 1MiB), see e.g. [http://serverfault.com/questions/356534/ssd-erase-block-size-lvm-pv-on-raw-device-alignment here]}}<br />
<br />
=== Create volume group ===<br />
<br />
Next step is to create a volume group on this physical volume. First you need to create a volume group on one of the new partitions and then add to it all other physical volumes you want to have in it:<br />
# vgcreate VolGroup00 /dev/sda2<br />
# vgextend VolGroup00 /dev/sdb1<br />
Also you can use any other name you like instead of VolGroup00 for a volume group when creating it. You can track how your volume group grows with:<br />
# vgdisplay<br />
<br />
{{Note|You can create more than one volume group if you need to, but then you will not have all your storage presented as one disk.}}<br />
<br />
=== Create logical volumes ===<br />
<br />
Now we need to create logical volumes on this volume group. You create a logical volume with the next command by giving the name of a new logical volume, its size, and the volume group it will live on:<br />
# lvcreate -L 10G VolGroup00 -n lvolhome<br />
This will create a logical volume that you can access later with {{ic|/dev/mapper/Volgroup00-lvolhome}} or {{ic|/dev/VolGroup00/lvolhome}}. Same as with the volume groups, you can use any name you want for your logical volume when creating it.<br />
<br />
To create swap on a logical volume, an additional argument is needed:<br />
# lvcreate -C y -L 10G VolGroup00 -n lvolswap<br />
The {{Ic|-C y}} is used to create a contiguous partition, which means that your swap space does not get partitioned over one or more disks nor over non-contiguous physical extents.<br />
<br />
If you want to fill all the free space left on a volume group, use the next command:<br />
# lvcreate -l +100%FREE VolGroup00 -n lvolmedia<br />
<br />
You can track created logical volumes with:<br />
# lvdisplay<br />
<br />
{{Note|You may need to load the ''device-mapper'' kernel module ('''modprobe dm-mod''') for the above commands to succeed:}}<br />
<br />
{{Tip|You can start out with relatively small logical volumes and expand them later if needed. For simplicity, leave some free space in the volume group so there is room for expansion.}}<br />
<br />
=== Create file systems and mount logical volumes ===<br />
<br />
Your logical volumes should now be located in {{ic|/dev/mapper/}} and {{ic|/dev/''YourVolumeGroupName''}}. If you cannot find them, use the next commands to bring up the module for creating device nodes and to make volume groups available:<br />
# modprobe dm-mod<br />
# vgscan<br />
# vgchange -ay<br />
Now you can create file systems on logical volumes and mount them as normal partitions (if you are installing Arch linux, refer to [[Beginners' guide#Mount the partitions|mounting the partitions]] for additional details):<br />
# mkfs.ext4 /dev/mapper/VolGroup00-lvolhome<br />
# mount /dev/mapper/VolGroup00-lvolhome /home<br />
<br />
{{Warning|When choosing mountpoints, just select your newly created logical volumes (use: {{ic|/dev/mapper/Volgroup00-lvolhome}}). Do '''not''' select the actual partitions on which logical volumes were created (do not use: {{ic|/dev/sda2}}).}}<br />
<br />
=== Add lvm hook to mkinitcpio.conf ===<br />
<br />
You will need to make sure the {{Ic|udev}} and {{Ic|lvm2}} [[mkinitcpio]] hooks are enabled.<br />
<br />
{{Ic|udev}} is there by default. Edit the file and insert {{Ic|lvm2}} between {{Ic|block}} and {{Ic|filesystem}} like so:<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>HOOKS="base udev ... block lvm2 filesystems"</nowiki>}}<br />
<br />
Afterwards, you can continue in normal installation instructions with the [[Mkinitcpio#Image_creation_and_activation|create an initial ramdisk]] step.<br />
<br />
== Configuration ==<br />
<br />
=== Advanced options ===<br />
<br />
If you need monitoring (needed for snapshots) you can enable lvmetad. <br />
For this set {{ic|1=use_lvmetad = 1}} in {{ic|/etc/lvm/lvm.conf}}.<br />
This is the default by now. <br />
<br />
You can restrict the volumes that are activated automatically by setting the {{Ic|auto_activation_volume_list}} in {{Ic|/etc/lvm/lvm.conf}}. If in doubt, leave this option commented out.<br />
<br />
=== Grow physical volume ===<br />
<br />
After changing the size of a physical volume (pv) <br />
<br />
e.g: growing a mdadm raid array, you need to grow the pv using the following command, <br />
<br />
Note: (can be done online):<br />
# pvresize /dev/mdX<br />
<br />
=== Grow logical volume ===<br />
<br />
To grow a logical volume you first need to grow the logical volume and then the file system to use the newly created free space. Let us say we have a logical volume of 15 GB with ext3 on it, and we want to grow it to 20 GB. We need to do the following steps: <br />
# lvextend -L 20G VolGroup00/lvolhome (or lvresize -L +5G VolGroup00/lvolhome)<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
You may use {{Ic|lvresize}} instead of {{Ic|lvextend}}.<br />
<br />
If you want to fill all the free space on a volume group, use the next commands:<br />
# lvextend -l +100%FREE VolGroup00/lvolhome<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
<br />
{{Warning|Not all file systems support growing without loss of data and/or growing online.}}<br />
<br />
{{Note|If you do not resize your file system, you will still have a volume with the same size as before (volume will be bigger but partly unused).}}<br />
<br />
=== Shrink logical volume ===<br />
<br />
Because your file system is probably as big as the logical volume it resides on, you need to shrink the file system first and then shrink the logical volume. Depending on your file system, you may need to unmount it first. Let us say we have a logical volume of 15 GB with ext3 on it and we want to shrink it to 10 GB. We need to do the following steps: <br />
# resize2fs /dev/VolGroup00/lvolhome 9G<br />
# lvreduce -L 10G VolGroup00/lvolhome<br />
<br />
Here we shrunk the file system more than needed so that when we shrunk the logical volume we did not accidentally cut off the end of the file system. After that, we normally grow the file system to fill all free space left on logical volume. You may use {{Ic|lvresize}} instead of {{Ic|lvreduce}}.<br />
# lvresize -L -5G VolGroup00/lvolhome<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
<br />
{{Warning|<br />
* Do not reduce the file system size to less than the amount of space occupied by data or you risk data loss.<br />
* Not all file systems support shrinking without loss of data and/or shrinking online.<br />
}}<br />
<br />
{{Note|It is better to reduce the file system to a smaller size than the logical volume, so that after resizing the logical volume, we do not accidentally cut off some data from the end of the file system.}}<br />
<br />
=== Remove logical volume ===<br />
<br />
{{Warning|Before you remove a logical volume, make sure to move all data that you want to keep somewhere else; otherwise, it will be lost!}}<br />
<br />
First, find out the name of the logical volume you want to remove. You can get a list of all logical volumes installed on the system with:<br />
<br />
# lvs<br />
<br />
Next, look up the mountpoint for your chosen logical volume...:<br />
<br />
$ df -h<br />
<br />
... and unmount it:<br />
<br />
# umount /your_mountpoint<br />
<br />
Finally, remove the logical volume:<br />
<br />
# lvremove /dev/yourVG/yourLV<br />
<br />
Confirm by typing {{ic|y}} and you are done.<br />
<br />
Do not forget, to update {{ic|/etc/fstab}}!<br />
<br />
You can verify the removal of your logical volume by typing {{ic|lvs}} as root again (see first step of this section).<br />
<br />
=== Add physical volume to a volume group ===<br />
<br />
You first create a new physical volume on the block device you wish to use, then extend your volume group<br />
<br />
{{bc|1=<br />
# pvcreate /dev/sdb1<br />
# vgextend VolGroup00 /dev/sdb1<br />
}}<br />
<br />
This of course will increase the total number of physical extents on your volume group, which can be allocated by logical volumes as you see fit.<br />
<br />
{{Note|It is considered good form to have a [[Partitioning|partition table]] on your storage medium below LVM. Use the appropriate type code: {{ic|8e}} for MBR, and {{ic|8e00}} for GPT partitions.}}<br />
<br />
=== Remove partition from a volume group ===<br />
<br />
All of the data on that partition needs to be moved to another partition. Fortunately, LVM makes this easy:<br />
# pvmove /dev/sdb1<br />
If you want to have the data on a specific physical volume, specify that as the second argument to {{Ic|pvmove}}:<br />
# pvmove /dev/sdb1 /dev/sdf1<br />
Then the physical volume needs to be removed from the volume group:<br />
# vgreduce myVg /dev/sdb1<br />
Or remove all empty physical volumes:<br />
# vgreduce --all vg0<br />
<br />
And lastly, if you want to use the partition for something else, and want to avoid LVM thinking that the partition is a physical volume:<br />
# pvremove /dev/sdb1<br />
<br />
<br />
=== Deactivate volume group ===<br />
<br />
Just invoke <br />
# vgchange -a n my_volume_group<br />
<br />
This will deactivate the volume group and allow you to unmount the container it is stored in.<br />
<br />
=== Snapshots ===<br />
<br />
==== Introduction ====<br />
<br />
LVM allows you to take a snapshot of your system in a much more efficient way than a traditional backup. It does this efficiently by using a COW (copy-on-write) policy. The initial snapshot you take simply contains hard-links to the inodes of your actual data. So long as your data remains unchanged, the snapshot merely contains its inode pointers and not the data itself. Whenever you modify a file or directory that the snapshot points to, LVM automatically clones the data, the old copy referenced by the snapshot, and the new copy referenced by your active system. Thus, you can snapshot a system with 35GB of data using just 2GB of free space so long as you modify less than 2GB (on both the original and snapshot).<br />
<br />
==== Configuration ====<br />
<br />
You create snapshot logical volumes just like normal ones.<br />
<br />
# lvcreate --size 100M --snapshot --name snap01 /dev/mapper/vg0-pv<br />
With that volume, you may modify less than 100M of data, before the snapshot volume fills up.<br />
<br />
Reverting the modified 'pv' logical volume to the state when the 'snap01' snapshot was taken can be done with<br />
<br />
{{ic|# lvconvert --merge /dev/vg0/snap01}}<br />
<br />
In case the origin logical volume is active, merging will occur on the next reboot.(Merging can be done even from a LiveCD)<br />
<br />
The snapshot will no longer exist after merging.<br />
<br />
Also multiple snapshots can be taken and each one can be merged with the origin logical volume at will.<br />
<br />
The snapshot can be mounted and backed up with '''dd''' or '''tar'''. The size of the backup file done with '''dd''' will be the size of the files residing on the snapshot volume. <br />
To restore just create a snapshot, mount it, and write or extract the backup to it. And then merge it with the origin.<br />
<br />
It is important to have the ''dm_snapshot'' module listed in the MODULES variable of {{ic|/etc/mkinitcpio.conf}}, otherwise the system will not boot. If you do this on an already installed system, make sure to rebuild the image with<br />
# mkinitcpio -g /boot/initramfs-linux.img<br />
<br />
Todo: scripts to automate snapshots of root before updates, to rollback... updating {{ic|menu.lst}} to boot snapshots (separate article?)<br />
<br />
Snapshots are primarily used to provide a frozen copy of a file system for making backups; a backup that takes two hours provides a more consistent image of the file system than backing up the partition directly.<br />
<br />
See [[Create root filesystem snapshots with LVM]] for automating the creation of clean root file system snapshots during system startup for backup and rollback.<br />
<br />
See [[Dm-crypt/Encrypting an Entire System#LVM on LUKS]] and [[Dm-crypt/Encrypting an Entire System#LUKS on LVM]] for combining LVM with disk encryption.<br />
<br />
If you have LVM volumes not activated via the [[Mkinitcpio|initramfs]], [[systemd#Using units|enable]] the '''lvm-monitoring''' service, which is provided by the {{pkg|lvm2}} package.<br />
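<br />
With systemd this amounts to (assuming the unit name matches the service name given above):<br />
# systemctl enable lvm-monitoring.service<br />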
<br />
== Troubleshooting ==<br />
<br />
=== Changes that could be required due to changes in the Arch-Linux defaults ===<br />
<br />
The option {{ic|1=use_lvmetad = 1}} must be set in {{ic|/etc/lvm/lvm.conf}}. This is now the default; if you have a {{ic|lvm.conf.pacnew}} file, you must merge this change.<br />
<br />
=== LVM commands do not work ===<br />
<br />
* Load proper module:<br />
# modprobe dm_mod<br />
<br />
The {{ic|dm_mod}} module should be loaded automatically. If it is not, you can try:<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>MODULES="dm_mod ..."</nowiki>}}<br />
<br />
You will need to [[Mkinitcpio#Image_creation_and_activation|rebuild]] the initramfs to commit any changes you made.<br />
<br />
* Try preceding commands with ''lvm'' like this:<br />
# lvm pvdisplay<br />
<br />
=== Logical Volumes do not show up ===<br />
<br />
If you are trying to mount existing logical volumes, but they do not show up in {{ic|lvscan}}, you can use the following commands to activate them:<br />
<br />
# vgscan<br />
# vgchange -ay<br />
<br />
=== LVM on removable media ===<br />
<br />
Symptoms:<br />
# vgscan<br />
Reading all physical volumes. This may take a while...<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836585984: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836643328: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 0: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 4096: Input/output error<br />
Found volume group "backupdrive1" using metadata type lvm2<br />
Found volume group "networkdrive" using metadata type lvm2<br />
<br />
Cause:<br />
:Removing an external LVM drive without deactivating the volume group(s) first. Before you disconnect, make sure to:<br />
# vgchange -an ''volume group name''<br />
<br />
Fix (assuming you have already tried to activate the volume group with {{ic|# vgchange -ay ''vg''}} and are receiving Input/output errors):<br />
# vgchange -an ''volume group name''<br />
Unplug the external drive and wait a few minutes:<br />
# vgscan<br />
# vgchange -ay ''volume group name''<br />
<br />
=== Kernel options ===<br />
<br />
In kernel options, you may need {{ic|dolvm}}. {{ic|<nowiki>root=</nowiki>}} should be set to the logical volume, e.g. {{ic|/dev/mapper/''vg-name''-''lv-name''}}.<br />
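<br />
For example, the kernel command line might contain (assuming the VolGroup00/lvolhome names used earlier):<br />
root=/dev/mapper/VolGroup00-lvolhome dolvm<br />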
<br />
== See also ==<br />
<br />
* [http://sourceware.org/lvm2/ LVM2 Resource Page] on SourceWare.org<br />
* [http://tldp.org/HOWTO/LVM-HOWTO/ LVM HOWTO] article at The Linux Documentation project<br />
* [http://www.gentoo.org/doc/en/lvm2.xml Gentoo LVM2 installation] guide at Gentoo documentation<br />
* [http://wiki.gentoo.org/wiki/LVM LVM] article at Gentoo wiki<br />
* [http://www.joshbryan.com/blog/2008/01/02/lvm2-mirrors-vs-md-raid-1/ LVM2 Mirrors vs. MD Raid 1] post by Josh Bryan<br />
* [http://www.tutonics.com/2012/11/ubuntu-lvm-guide-part-1.html Ubuntu LVM Guide Part 1][http://www.tutonics.com/2012/12/lvm-guide-part-2-snapshots.html Part 2 details snapshots]</div>Ploomshttps://wiki.archlinux.org/index.php?title=LVM&diff=307476LVM2014-03-28T10:37:31Z<p>Plooms: Add section about growing physical volumes</p>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
[[Category:File systems]]<br />
[[cs:LVM]]<br />
[[de:LVM]]<br />
[[es:LVM]]<br />
[[fr:LVM]]<br />
[[it:LVM]]<br />
[[ja:LVM]]<br />
[[ru:LVM]]<br />
[[tr:LVM]]<br />
[[zh-CN:LVM]]<br />
{{Related articles start}}<br />
{{Related|Software RAID and LVM}}<br />
{{Related|System Encryption with LUKS}}<br />
{{Related|dm-crypt/Encrypting an Entire System#LVM on LUKS}}<br />
{{Related|dm-crypt/Encrypting an Entire System#LUKS on LVM}}<br />
{{Related articles end}}<br />
From [[Wikipedia:Logical Volume Manager (Linux)]]:<br />
:LVM is a [[Wikipedia:logical volume management|logical volume manager]] for the [[Wikipedia:Linux kernel|Linux kernel]]; it manages disk drives and similar mass-storage devices.<br />
<br />
=== LVM Building Blocks ===<br />
<br />
Logical Volume Management makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel to provide a system of partitions independent of the underlying disk's layout. With LVM you abstract your storage and have "virtual partitions", making it easier to extend and shrink partitions (subject to potential limitations of your file system) and to add or remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in fdisking a disk in use (and wondering whether the kernel is using the old or new partition table), and without having to move other partitions out of the way. This is strictly an ease-of-management issue: it does not provide any security. However, it combines nicely with technologies such as RAID and disk encryption.<br />
<br />
The basic building blocks of LVM are:<br />
<br />
* '''Physical volume (PV)''': Partition on hard disk (or even hard disk itself or loopback file) on which you can have volume groups. It has a special header and is divided into physical extents. Think of physical volumes as big building blocks which can be used to build your hard drive.<br />
* '''Volume group (VG)''': Group of physical volumes that are used as storage volume (as one disk). They contain logical volumes. Think of volume groups as hard drives.<br />
* '''Logical volume (LV)''': A "virtual/logical partition" that resides in a volume group and is composed of physical extents. Think of logical volumes as normal partitions.<br />
* '''Physical extent (PE)''': A small part of a disk (usually 4MiB) that can be assigned to a logical volume. Think of physical extents as parts of disks that can be allocated to any partition.<br />
<br />
Example:<br />
'''Physical disks'''<br />
<br />
Disk1 (/dev/sda):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 50GB (Physical volume) |Partition2 80GB (Physical volume) |<br />
|/dev/sda1 |/dev/sda2 |<br />
|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
Disk2 (/dev/sdb):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 120GB (Physical volume) |<br />
|/dev/sdb1 |<br />
| _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _|<br />
<br />
'''LVM logical volumes'''<br />
<br />
Volume Group1 (/dev/MyStorage/ = /dev/sda1 + /dev/sda2 + /dev/sdb1):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <br />
|Logical volume1 15GB |Logical volume2 35GB |Logical volume3 200GB |<br />
|/dev/MyStorage/rootvol|/dev/MyStorage/homevol |/dev/MyStorage/mediavol |<br />
|_ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
=== Advantages ===<br />
<br />
LVM gives you more flexibility than just using normal hard drive partitions:<br />
* Use any number of disks as one big disk.<br />
* Have logical volumes stretched over several disks.<br />
* Create small logical volumes and resize them "dynamically" as they get more filled.<br />
* Resize logical volumes regardless of their order on disk. It does not depend on the position of the LV within VG, there is no need to ensure surrounding available space.<br />
* Resize/create/delete logical and physical volumes online. File systems on them still need to be resized, but some support online resizing.<br />
* Online/live migration of LV being used by services to different disks without having to restart services.<br />
* Snapshots allow you to back up a frozen copy of the file system, while keeping service downtime to a minimum.<br />
<br />
These features can be very helpful in a server situation, less so on a desktop, but you must decide whether they are worth the abstraction.<br />
<br />
=== Disadvantages ===<br />
<br />
* Almost exclusive to Linux. There is no official support in most other operating systems (FreeBSD, Windows, ...).<br />
* Additional steps are required when setting up the system, making it more complicated.<br />
* If you use the [[Btrfs]] file system, its subvolume feature already gives you the benefit of a flexible layout. In that case, the additional abstraction layer of LVM may be unnecessary.<br />
<br />
== Installing Arch Linux on LVM ==<br />
<br />
You should create your LVM volumes between the [[Partitioning]] and [[File Systems#Format a device|formatting]] steps of the installation procedure. Instead of formatting a partition directly to be your root file system, the file system will be created inside a logical volume (LV). <br />
<br />
Make sure the {{pkg|lvm2}} package is [[pacman|installed]].<br />
<br />
Quick overview: <br />
* Create partition(s) where your PV will reside. Set the partition type to 'Linux LVM': {{ic|8e}} if you use MBR, {{ic|8e00}} for GPT (see the example after this list).<br />
* Create your physical volumes (PV). If you have one disk it is best to just create one PV in one large partition. If you have multiple disks you can create partitions on each of them and create a PV on each partition.<br />
* Create your volume group (VG) and add all the PV to it.<br />
* Create logical volumes (LV) inside your VG.<br />
* Continue with “Format the partitions” step of [[Beginners' guide]].<br />
* When you reach the “Create initial ramdisk environment” step in the Beginners' guide, add the {{ic|lvm2}} hook to {{ic|/etc/mkinitcpio.conf}} (see below for details).<br />
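<br />
For example, to tag the first partition of a GPT disk as 'Linux LVM' non-interactively (a sketch using ''sgdisk''; adjust the disk and partition number to your layout):<br />
# sgdisk --typecode=1:8e00 /dev/sdb<br />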
<br />
{{Warning|{{ic|/boot}} cannot reside in LVM when using [[GRUB Legacy]], which does not support LVM. [[GRUB]] users do not have this limitation. If you need to use GRUB Legacy, you must create a separate {{ic|/boot}} partition and format it directly. }}<br />
<br />
=== Create physical volumes ===<br />
<br />
Make sure you target the right partitions! To find the partitions with type 'Linux LVM':<br />
* MBR system: {{Ic|fdisk -l}}<br />
* GPT system: {{Ic|lsblk}} and then {{Ic|gdisk -l ''disk-device''}}<br />
<br />
Create a physical volume on them:<br />
# pvcreate ''disk-device''<br />
''disk-device'' may be e.g. {{ic|/dev/sda2}}.<br />
This command creates a header on each partition so it can be used for LVM.<br />
You can track created physical volumes with:<br />
# pvdisplay<br />
<br />
{{Note|If using a SSD, use {{ic|pvcreate --dataalignment 1m /dev/sda2}} (for erase block size < 1MiB), see e.g. [http://serverfault.com/questions/356534/ssd-erase-block-size-lvm-pv-on-raw-device-alignment here]}}<br />
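<br />
Multiple partitions can be initialized in one go, for example (assuming {{ic|/dev/sda2}} and {{ic|/dev/sdb1}} are your 'Linux LVM' partitions):<br />
# pvcreate /dev/sda2 /dev/sdb1<br />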
<br />
=== Create volume group ===<br />
<br />
The next step is to create a volume group. First create a volume group on one of the new physical volumes, then add to it all other physical volumes you want it to contain:<br />
# vgcreate VolGroup00 /dev/sda2<br />
# vgextend VolGroup00 /dev/sdb1<br />
You can also use any other name you like instead of VolGroup00 when creating the volume group. You can track how your volume group grows with:<br />
# vgdisplay<br />
<br />
{{Note|You can create more than one volume group if you need to, but then you will not have all your storage presented as one disk.}}<br />
<br />
=== Create logical volumes ===<br />
<br />
Now we need to create logical volumes in this volume group. You create a logical volume with the following command, giving the name of the new logical volume, its size, and the volume group it will live in:<br />
# lvcreate -L 10G VolGroup00 -n lvolhome<br />
This will create a logical volume that you can access later with {{ic|/dev/mapper/VolGroup00-lvolhome}} or {{ic|/dev/VolGroup00/lvolhome}}. As with volume groups, you can use any name you want for your logical volume when creating it.<br />
<br />
To create swap on a logical volume, an additional argument is needed:<br />
# lvcreate -C y -L 10G VolGroup00 -n lvolswap<br />
The {{Ic|-C y}} option creates a contiguous volume, which means that your swap space is not split over one or more disks nor over non-contiguous physical extents.<br />
<br />
If you want to fill all the free space left in a volume group, use the following command:<br />
# lvcreate -l +100%FREE VolGroup00 -n lvolmedia<br />
<br />
You can track created logical volumes with:<br />
# lvdisplay<br />
<br />
{{Note|You may need to load the ''device-mapper'' kernel module ('''modprobe dm-mod''') for the above commands to succeed.}}<br />
<br />
{{Tip|You can start out with relatively small logical volumes and expand them later if needed. For simplicity, leave some free space in the volume group so there is room for expansion.}}<br />
<br />
=== Create file systems and mount logical volumes ===<br />
<br />
Your logical volumes should now be located in {{ic|/dev/mapper/}} and {{ic|/dev/''YourVolumeGroupName''}}. If you cannot find them, use the following commands to load the module for creating device nodes and to make the volume groups available:<br />
# modprobe dm-mod<br />
# vgscan<br />
# vgchange -ay<br />
Now you can create file systems on logical volumes and mount them as normal partitions (if you are installing Arch linux, refer to [[Beginners' guide#Mount the partitions|mounting the partitions]] for additional details):<br />
# mkfs.ext4 /dev/mapper/VolGroup00-lvolhome<br />
# mount /dev/mapper/VolGroup00-lvolhome /home<br />
<br />
{{Warning|When choosing mountpoints, just select your newly created logical volumes (use: {{ic|/dev/mapper/VolGroup00-lvolhome}}). Do '''not''' select the actual partitions on which logical volumes were created (do not use: {{ic|/dev/sda2}}).}}<br />
<br />
=== Add lvm hook to mkinitcpio.conf ===<br />
<br />
You will need to make sure the {{Ic|udev}} and {{Ic|lvm2}} [[mkinitcpio]] hooks are enabled.<br />
<br />
{{Ic|udev}} is there by default. Edit the file and insert {{Ic|lvm2}} between {{Ic|block}} and {{Ic|filesystems}} like so:<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>HOOKS="base udev ... block lvm2 filesystems"</nowiki>}}<br />
<br />
Afterwards, you can continue in normal installation instructions with the [[Mkinitcpio#Image_creation_and_activation|create an initial ramdisk]] step.<br />
<br />
== Configuration ==<br />
<br />
=== Advanced options ===<br />
<br />
If you need monitoring (needed for snapshots), you can enable ''lvmetad''.<br />
For this, set {{ic|1=use_lvmetad = 1}} in {{ic|/etc/lvm/lvm.conf}}.<br />
This is now the default.<br />
<br />
You can restrict the volumes that are activated automatically by setting the {{Ic|auto_activation_volume_list}} in {{Ic|/etc/lvm/lvm.conf}}. If in doubt, leave this option commented out.<br />
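<br />
For example (the volume group and volume names are illustrative), to auto-activate only the volumes of one volume group plus one specific logical volume, the line could look like this:<br />
 auto_activation_volume_list = [ "VolGroup00", "OtherVG/lvolhome" ]<br />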
<br />
=== Grow physical volume ===<br />
<br />
After changing the size of a physical volume (PV), e.g. after growing an mdadm RAID array, you need to grow the PV using:<br />
# pvresize /dev/mdX<br />
<br />
=== Grow logical volume ===<br />
<br />
To grow a logical volume, you first grow the logical volume itself and then the file system on it, so it can use the newly created free space. Let us say we have a logical volume of 15 GB with ext3 on it, and we want to grow it to 20 GB. We need to do the following steps:<br />
# lvextend -L 20G VolGroup00/lvolhome (or lvresize -L +5G VolGroup00/lvolhome)<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
You may use {{Ic|lvresize}} instead of {{Ic|lvextend}}.<br />
<br />
If you want to fill all the free space on a volume group, use the next commands:<br />
# lvextend -l +100%FREE VolGroup00/lvolhome<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
<br />
{{Warning|Not all file systems support growing without loss of data and/or growing online.}}<br />
<br />
{{Note|If you do not resize your file system, you will still have a volume with the same size as before (volume will be bigger but partly unused).}}<br />
<br />
=== Shrink logical volume ===<br />
<br />
Because your file system is probably as big as the logical volume it resides on, you need to shrink the file system first and then shrink the logical volume. Depending on your file system, you may need to unmount it first. Let us say we have a logical volume of 15 GB with ext3 on it and we want to shrink it to 10 GB. We need to do the following steps: <br />
# resize2fs /dev/VolGroup00/lvolhome 9G<br />
# lvreduce -L 10G VolGroup00/lvolhome<br />
<br />
Here we shrank the file system more than needed so that, when we shrunk the logical volume, we did not accidentally cut off the end of the file system. Afterwards, we grow the file system to fill all the free space left on the logical volume. You may use {{Ic|lvresize}} (e.g. {{ic|lvresize -L -5G VolGroup00/lvolhome}}) instead of {{Ic|lvreduce}}.<br />
 # resize2fs /dev/VolGroup00/lvolhome<br />
<br />
{{Warning|<br />
* Do not reduce the file system size to less than the amount of space occupied by data or you risk data loss.<br />
* Not all file systems support shrinking without loss of data and/or shrinking online.<br />
}}<br />
<br />
{{Note|It is better to reduce the file system to a smaller size than the logical volume, so that after resizing the logical volume, we do not accidentally cut off some data from the end of the file system.}}<br />
<br />
=== Remove logical volume ===<br />
<br />
{{Warning|Before you remove a logical volume, make sure to move all data that you want to keep somewhere else; otherwise, it will be lost!}}<br />
<br />
First, find out the name of the logical volume you want to remove. You can get a list of all logical volumes installed on the system with:<br />
<br />
# lvs<br />
<br />
Next, look up the mountpoint for your chosen logical volume...:<br />
<br />
$ df -h<br />
<br />
... and unmount it:<br />
<br />
# umount /your_mountpoint<br />
<br />
Finally, remove the logical volume:<br />
<br />
# lvremove /dev/yourVG/yourLV<br />
<br />
Confirm by typing {{ic|y}} and you are done.<br />
<br />
Do not forget to update {{ic|/etc/fstab}}!<br />
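<br />
For example, remove or comment out an entry like the following (the names match the illustrative ones used above):<br />
 /dev/mapper/yourVG-yourLV /your_mountpoint ext4 defaults 0 2<br />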
<br />
You can verify the removal of your logical volume by typing {{ic|lvs}} as root again (see first step of this section).<br />
<br />
=== Add physical volume to a volume group ===<br />
<br />
You first create a new physical volume on the block device you wish to use, then extend your volume group:<br />
<br />
{{bc|1=<br />
# pvcreate /dev/sdb1<br />
# vgextend VolGroup00 /dev/sdb1<br />
}}<br />
<br />
This of course will increase the total number of physical extents on your volume group, which can be allocated by logical volumes as you see fit.<br />
<br />
{{Note|It is considered good form to have a [[Partitioning|partition table]] on your storage medium below LVM. Use the appropriate type code: {{ic|8e}} for MBR, and {{ic|8e00}} for GPT partitions.}}<br />
<br />
=== Remove partition from a volume group ===<br />
<br />
All of the data on that partition needs to be moved to another partition. Fortunately, LVM makes this easy:<br />
# pvmove /dev/sdb1<br />
If you want to have the data on a specific physical volume, specify that as the second argument to {{Ic|pvmove}}:<br />
# pvmove /dev/sdb1 /dev/sdf1<br />
Then the physical volume needs to be removed from the volume group:<br />
# vgreduce myVg /dev/sdb1<br />
Or remove all empty physical volumes:<br />
# vgreduce --all vg0<br />
<br />
And lastly, if you want to use the partition for something else, and want to avoid LVM thinking that the partition is a physical volume:<br />
# pvremove /dev/sdb1<br />
<br />
=== Deactivate volume group ===<br />
<br />
Just invoke <br />
# vgchange -a n my_volume_group<br />
<br />
This will deactivate the volume group and allow you to unmount the container it is stored in.<br />
<br />
=== Snapshots ===<br />
<br />
==== Introduction ====<br />
<br />
LVM allows you to take a snapshot of your system in a much more efficient way than a traditional backup. It does this efficiently by using a COW (copy-on-write) policy at the block level. The initial snapshot holds only metadata pointing at the original data, not a physical copy of it. Whenever a block on the origin volume is about to be modified, its old content is first copied into the snapshot volume, so the snapshot keeps referencing the old data while your active system uses the new. Thus, you can snapshot a system with 35GB of data using just 2GB of free space, as long as you modify less than 2GB (on both the original and snapshot).<br />
<br />
==== Configuration ====<br />
<br />
You create snapshot logical volumes just like normal ones.<br />
<br />
# lvcreate --size 100M --snapshot --name snap01 /dev/mapper/vg0-pv<br />
With that volume, you may modify up to 100M of data before the snapshot volume fills up.<br />
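<br />
To check how full a snapshot is, you can use {{ic|lvs}}; for snapshot volumes, the ''Data%'' column shows how much of the COW space is in use (the volume group name is the one from the example above):<br />
 # lvs vg0<br />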
<br />
Reverting the modified 'pv' logical volume to the state when the 'snap01' snapshot was taken can be done with<br />
<br />
{{ic|# lvconvert --merge /dev/vg0/snap01}}<br />
<br />
In case the origin logical volume is active, merging will occur on the next reboot (merging can be done even from a live CD).<br />
<br />
The snapshot will no longer exist after merging.<br />
<br />
Multiple snapshots can also be taken, and each one can be merged with the origin logical volume at will.<br />
<br />
The snapshot can be mounted and backed up with '''dd''' or '''tar'''. Note that a backup made with '''dd''' of the snapshot device will be the size of the whole snapshot volume, while '''tar''' only archives the files residing on it.<br />
To restore, just create a snapshot, mount it, write or extract the backup to it, and then merge it with the origin.<br />
<br />
It is important to have the ''dm_snapshot'' module listed in the MODULES variable of {{ic|/etc/mkinitcpio.conf}}, otherwise the system will not boot. If you do this on an already installed system, make sure to rebuild the image with<br />
# mkinitcpio -g /boot/initramfs-linux.img<br />
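<br />
The relevant line could look like this (the ellipsis stands for whatever modules you already list there):<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>MODULES="dm_snapshot ..."</nowiki>}}<br />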
<br />
Todo: scripts to automate snapshots of root before updates, to rollback... updating {{ic|menu.lst}} to boot snapshots (separate article?)<br />
<br />
Snapshots are primarily used to provide a frozen copy of a file system for backups; a backup taking two hours provides a more consistent image of the file system than directly backing up the partition.<br />
<br />
See [[Create root filesystem snapshots with LVM]] for automating the creation of clean root file system snapshots during system startup for backup and rollback.<br />
<br />
For LVM in combination with disk encryption, see [[Dm-crypt/Encrypting an Entire System#LVM on LUKS]] and [[Dm-crypt/Encrypting an Entire System#LUKS on LVM]].<br />
<br />
If you have LVM volumes not activated via the [[Mkinitcpio|initramfs]], [[#Using units|enable]] the '''lvm-monitoring''' service, which is provided by the {{pkg|lvm2}} package.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Changes that could be required due to changes in the Arch-Linux defaults ===<br />
<br />
{{ic|1=use_lvmetad = 1}} must be set in {{ic|/etc/lvm/lvm.conf}}. This is now the default; if you have an {{ic|lvm.conf.pacnew}} file, you must merge this change.<br />
<br />
=== LVM commands do not work ===<br />
<br />
* Load proper module:<br />
# modprobe dm_mod<br />
<br />
The {{ic|dm_mod}} module should be loaded automatically. In case it is not, try:<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>MODULES="dm_mod ..."</nowiki>}}<br />
<br />
You will need to [[Mkinitcpio#Image_creation_and_activation|rebuild]] the initramfs to commit any changes you made.<br />
<br />
* Try preceding commands with ''lvm'' like this:<br />
# lvm pvdisplay<br />
<br />
=== Logical Volumes do not show up ===<br />
<br />
If you are trying to mount existing logical volumes, but they do not show up in {{ic|lvscan}}, you can use the following commands to activate them:<br />
<br />
# vgscan<br />
# vgchange -ay<br />
<br />
=== LVM on removable media ===<br />
<br />
Symptoms:<br />
# vgscan<br />
Reading all physical volumes. This may take a while...<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836585984: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836643328: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 0: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 4096: Input/output error<br />
Found volume group "backupdrive1" using metadata type lvm2<br />
Found volume group "networkdrive" using metadata type lvm2<br />
<br />
Cause:<br />
:Removing an external LVM drive without deactivating the volume group(s) first. Before you disconnect, make sure to:<br />
# vgchange -an ''volume group name''<br />
<br />
Fix: assuming you already tried to activate the volume group with {{ic|# vgchange -ay ''vg''}} and are receiving the Input/output errors:<br />
# vgchange -an ''volume group name''<br />
Unplug the external drive and wait a few minutes:<br />
# vgscan<br />
# vgchange -ay ''volume group name''<br />
<br />
=== Kernel options ===<br />
<br />
In kernel options, you may need {{ic|dolvm}}. {{ic|<nowiki>root=</nowiki>}} should be set to the logical volume, e.g. {{ic|/dev/mapper/''vg-name''-''lv-name''}}.<br />
<br />
== See also ==<br />
<br />
* [http://sourceware.org/lvm2/ LVM2 Resource Page] on SourceWare.org<br />
* [http://tldp.org/HOWTO/LVM-HOWTO/ LVM HOWTO] article at The Linux Documentation project<br />
* [http://www.gentoo.org/doc/en/lvm2.xml Gentoo LVM2 installation] guide at Gentoo documentation<br />
* [http://wiki.gentoo.org/wiki/LVM LVM] article at Gentoo wiki<br />
* [http://www.joshbryan.com/blog/2008/01/02/lvm2-mirrors-vs-md-raid-1/ LVM2 Mirrors vs. MD Raid 1] post by Josh Bryan<br />
* [http://www.tutonics.com/2012/11/ubuntu-lvm-guide-part-1.html Ubuntu LVM Guide Part 1][http://www.tutonics.com/2012/12/lvm-guide-part-2-snapshots.html Part 2 details snapshots]</div>Ploomshttps://wiki.archlinux.org/index.php?title=ISCSI/LIO&diff=304638ISCSI/LIO2014-03-15T21:08:58Z<p>Plooms: </p>
<hr />
<div>[[Category:Storage]]<br />
[[Category:Networking]]<br />
{{Article summary start}}<br />
{{Article summary text|How to set up an iSCSI Target using different tools.}}<br />
{{Article summary heading|Series}}<br />
{{Article summary wiki|iSCSI Target}}<br />
{{Article summary wiki|iSCSI Initiator}}<br />
{{Article summary heading|Related}}<br />
{{Article summary wiki|iSCSI Boot}}<br />
{{Article summary end}}<br />
<br />
With [[Wikipedia:iSCSI]] you can access storage over an IP-based network.<br />
<br />
The exported storage entity is the '''target''' and the importing entity is the '''[[iSCSI Initiator|initiator]]'''.<br />
<br />
There are different modules available to set up the target.<br />
The [http://stgt.berlios.de/ SCSI Target Framework (STGT/TGT)] was the standard before Linux 2.6.38.<br />
The current standard is the [http://linux-iscsi.org/ LIO target].<br />
The [http://iscsitarget.sourceforge.net/ iSCSI Enterprise Target (IET)] is an old implementation and [http://scst.sourceforge.net/ SCSI Target Subsystem (SCST)] is the successor of IET and was a possible candidate for kernel inclusion before the decision fell for LIO.<br />
<br />
There are packages available for LIO, STGT and IET in the [[AUR]] (see below).<br />
<br />
== Setup with LIO Target ==<br />
The LIO target has been included in the kernel since 2.6.38; however, the iSCSI target fabric is only included since Linux 3.1.<br />
<br />
The important kernel modules are ''target_core_mod'' and ''iscsi_target_mod'', which should be in the kernel and loaded automatically.<br />
<br />
It is highly recommended to use the free branch versions of the packages: {{AUR|targetcli-fb}}, {{AUR|python-rtslib-fb}} and {{AUR|python-configshell-fb}}. The original {{AUR|targetcli}} is also available but has a different way of saving the configuration using the deprecated ''lio-utils'' and depends on ''epydoc''.<br />
<br />
A systemd {{ic|target.service}} is included in {{AUR|python3-rtslib-fb}} when you use the free branch and a {{ic|/etc/rc.d/target}} in {{AUR|lio-utils}} when you use the original ''targetcli'' or ''lio-utils'' directly.<br />
<br />
You start LIO target with {{bc|# systemctl start target}}<br />
This will load the necessary modules, mount the configfs and load the previously saved iSCSI target configuration.<br />
<br />
With {{bc|# targetcli status}} you can show some information about the running configuration (only with the free branch).<br />
<br />
You might want to enable the lio target on boot with {{bc|# systemctl enable target}}.<br />
<br />
You can use '''targetcli''' to create the whole configuration or you can alternatively use the '''lio utils''' tcm_* and lio_* directly (deprecated).<br />
<br />
=== Using targetcli ===<br />
The external manual is only available in the ''free branch''. [https://github.com/agrover/targetd targetd] is not in the AUR yet; it depends on the free branch.<br />
<br />
The config shell creates most names and numbers for you automatically, but you can also provide your own settings.<br />
At any point in the shell you can type {{ic|help}} in order to see what commands you can issue here.<br />
{{Tip|You can use tab-completion in this shell}}<br />
{{Tip|You can type {{ic|cd}} in this shell to view & select paths}}<br />
<br />
After starting the target (see above) you enter the configuration shell with {{bc|# targetcli}}<br />
In this shell you include a block device (here: {{ic|/dev/disk/by-id/md-name-nas:iscsi}}) to use with<br />
{{bc|/> cd backstores/block<br>/backstores/block> create md_block0 /dev/disk/by-id/md-name-nas:iscsi}}<br />
{{Note|You can use any block device, also raid and lvm devices. You can also use files when you go to fileio instead of block.}}<br />
<br />
You then create an iSCSI Qualified Name (iqn) and a target portal group (tpg) with {{bc|...> cd /iscsi<br>/iscsi> create}}<br />
{{Note|By appending an iqn of your choice to {{ic|create}} you can keep targetcli from automatically generating one}}<br />
<br />
In order to tell LIO that your block device should be used as the ''backstore'' for the target, issue:<br />
{{Note|Remember that you can type {{ic|cd}} to select the path of your <iqn>/tpg1}}<br />
{{bc|.../tpg1> cd luns<br>.../tpg1/luns> create /backstores/block/md_block0}}<br />
<br />
Then you need to create a ''portal'', making a daemon listen for incoming connections:<br />
{{bc|.../luns/lun0> cd ../../portals<br>.../portals> create}}<br />
Targetcli will tell you the IP and port where LIO is listening for incoming connections (defaults to 0.0.0.0 (all)).<br />
You will need at least the IP for the clients. The port should be the standard port 3260.<br />
<br />
In order for a client/[[iSCSI Initiator|initiator]] to connect you need to include the iqn of the initiator in the target configuration:<br />
{{bc|...> cd ../../acls<br>.../acls> create iqn.2005-03.org.open-iscsi:SERIAL}}<br />
Instead of {{ic|iqn.2005-03.org.open-iscsi:SERIAL}} you use the iqn of an initiator.<br />
It can normally be found in {{ic|/etc/iscsi/initiatorname.iscsi}}.<br />
You have to do this for every initiator that needs to connect.<br />
Targetcli will automatically map the created lun to the newly created acl.<br />
{{Note|You can change the mapped luns and whether the access should be rw or ro. See {{ic|help create}} at this point in the targetcli shell.}}<br />
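<br />
For reference, {{ic|/etc/iscsi/initiatorname.iscsi}} on the client normally contains a single line of the following form (the iqn shown is only an example):<br />
 InitiatorName=iqn.2005-03.org.open-iscsi:SERIAL<br />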
<br />
The last thing you have to do in targetcli when everything works is saving the configuration with:<br />
...> cd /<br />
/> saveconfig<br />
This will save the configuration in {{ic|/etc/target/saveconfig.json}}.<br />
You can now safely start and stop {{ic|target.service}} without losing your configuration.<br />
{{Tip|You can give a filename as a parameter to {{ic|saveconfig}} and also clear a configuration with {{ic|clearconfig}}}}<br />
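<br />
For example (the path is illustrative), to save to an alternative file:<br />
 /> saveconfig /etc/target/backup.json<br />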
<br />
==== Authentication ====<br />
CHAP authentication is enabled by default for your targets.<br />
You can either set up passwords or disable this authentication.<br />
<br />
===== Disable Authentication =====<br />
Navigate targetcli to your target (e.g. /iscsi/iqn.../tpg1) and<br />
.../tpg1> set attribute authentication=0<br />
{{Warning|With this setting everybody who knows the iqn of one of your clients (initiators) can access the target. Use this for testing or home purposes only.}}<br />
===== Set Credentials =====<br />
Navigate to a certain acl of your target (e.g. /iscsi/iqn.../tpg1/acls/iqn.../) and<br />
...> get auth<br />
will show you the current authentication credentials.<br />
...> set auth userid=foo<br />
...> set auth password=bar<br />
would enable authentication with foo:bar.<br />
<br />
=== Using (plain) LIO utils ===<br />
You have to install {{AUR|lio-utils}} from [[AUR]] and the dependencies (python2).<br />
<br />
=== Tips & Tricks ===<br />
* With {{ic|targetcli sessions}} you can list the current open sessions. This command is included in the {{AUR|targetcli-fb}} package, but not in ''lio-utils'' or the original ''targetcli''.<br />
<br />
=== Upstream Documentation ===<br />
* [http://www.linux-iscsi.org/wiki/Targetcli targetcli]<br />
* [http://www.linux-iscsi.org/wiki/Lio-utils_HOWTO LIO utils]<br />
* You can also use {{ic|man targetcli}} when you installed the ''free branch'' version {{AUR|targetcli-fb}}.<br />
<br />
== Setup with SCSI Target Framework (STGT/TGT) ==<br />
You will need the Package {{AUR|tgt}} from [[AUR]].<br />
<br />
See: [[TGT iSCSI Target]]<br />
<br />
== Setup with iSCSI Enterprise Target (IET) ==<br />
You will need {{AUR|iscsitarget-kernel}} and {{AUR|iscsitarget-usr}} from [[AUR]].<br />
<br />
=== Create the Target === <br />
Modify {{ic|/etc/iet/ietd.conf}} accordingly.<br />
<br />
==== Hard Drive Target ====<br />
Target iqn.2010-06.ServerName:desc<br />
Lun 0 Path=/dev/sdX,Type=blockio<br />
<br />
==== File based Target ====<br />
Use "dd" to create a file of the required size, this example is 10GB.<br />
<br />
dd if=/dev/zero of=/root/os.img bs=1G count=10<br />
<br />
Target iqn.2010-06.ServerName:desc<br />
Lun 0 Path=/root/os.img,Type=fileio<br />
<br />
=== Start server services ===<br />
rc.d start iscsi-target<br />
<br />
Also you can "iscsi-target" to DAEMONS in /etc/rc.conf so that it starts up during boot.<br />
<br />
== See also ==<br />
* [[iSCSI Boot]] Booting Arch Linux with / on an iSCSI target.<br />
* [[Persistent block device naming]] in order to use the correct block device for a target</div>Ploomshttps://wiki.archlinux.org/index.php?title=Simple_IP_Failover_with_Heartbeat&diff=298612Simple IP Failover with Heartbeat2014-02-17T22:43:07Z<p>Plooms: /* Methods */</p>
<hr />
<div>[[Category:Networking]]<br />
[[Category:System recovery]]<br />
{{Poor writing|The page needs to be updated according to [[Help:Style]]}}<br />
''This article illustrates a method of implementing VERY simple active/passive IP failover using heartbeat. You are recommended to familiarize yourself with the concepts of High Availability clustering in Linux before proceeding to implement these instructions in a live/production environment. (See: [http://www.linux-ha.org/wiki/Main_Page Linux-HA])''<br />
<br />
== Methods ==<br />
For the purposes of this article we will not be configuring pacemaker; we will be using the older-style haresources file/method to define our highly available services with heartbeat.<br />
<br />
We will NOT be using a load balancer or any external resource agents with heartbeat; because of this, this setup only allows for a 2-node ACTIVE/PASSIVE cluster, and we will be using two PHYSICAL machines. You may however try this on two virtual machines/hosts first; there is no reason why this setup would not work in a virtual machine/environment.<br />
<br />
We will have two machines and at least 3 IP addresses. For my setup I have 3 publicly accessible/WAN IP addresses, but this may also be done using two internal/LAN IPs and a single (1) public/WAN IP address.<br />
<br />
The IP/Hostnames/DNS we will be using are as follows:<br />
*ha1.example.com: 100.200.230.1 (WAN)(Virtual IP, not a physical node)<br />
*node1.example.com 100.200.230.2 (WAN OR LAN)(Physical Node/Machine 1)<br />
*node2.example.com 100.200.230.3 (WAN OR LAN)(Physical Node/Machine 2)<br />
<br />
<nowiki>*</nowiki>100.200.230.2 & 100.200.230.3 (the IPs our two heartbeat nodes will use to communicate with each other over our Local Area Network; these may be public (WAN) or private (LAN) addresses. Each of these two nodes should preferably be on the same subnet, but all that is needed is that each node is able to communicate with the other.)<br />
<br />
<nowiki>*</nowiki>100.200.230.1 (our VIRTUAL IP address that the two nodes will "share" and monitor/bring alive if one node should stop communicating). This IP address may be on ANY subnet. It should be reachable from the public internet (WAN) IF you plan for your highly available services to be reachable from outside of your private Local Area Network (we will use this IP address to connect to our highly available services on the currently active node). Alternatively, this could be a private LAN IP address, where your router holds your public/WAN IP address and forwards the ports your highly available services will use to the LAN IP you choose for this.<br />
<br />
<nowiki>*</nowiki>It is NOT necessary to have a genuine registered domain name (FQDN) or a DNS server for the purposes of these instructions; heartbeat will use our /etc/hosts file for all heartbeat-related hostname/domain name lookups locally (regardless of what /etc/host.conf tells it to use). But if you would like to reach your highly available services via a domain name from outside your Local Area Network, you will have to register a domain and set it up with the proper zone/ns/mx/A record/CNAME definitions, and, optionally, install/run ISC BIND with the proper/relevant zone definitions and replicate those definitions in /etc/hosts.<br />
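<br />
A minimal {{ic|/etc/hosts}} for the two nodes above could look like this (identical on both nodes; the addresses and short names match the ''node'' directives used in ha.cf below):<br />
 100.200.230.2 node1.example.com node1<br />
 100.200.230.3 node2.example.com node2<br />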
<br />
== Installation ==<br />
Available in [[AUR]]:<br />
<br />
$ yaourt -S ha-glue ha-heartbeat ha-pacemaker ha-resourceagent inetutils net-tools<br />
<br />
== Configuration ==<br />
Edit the main configuration file for heartbeat /etc/ha.d/ha.cf and make it look like this<br />
<br />
deadtime 5<br />
warntime 10<br />
initdead 15<br />
bcast eth0<br />
auto_failback on<br />
node node1<br />
node node2<br />
use_logd yes<br />
<br />
{{Note|'auto_failback on' will tell heartbeat to prefer node1. If node1 should go down, node2 will take over our virtual IP address and start our highly available services. When/if node1 comes back alive, node2 will transfer (automatically fail back) our virtual IP and highly available services back to node1 and stop our H.A services on node2. If 'auto_failback off' is used, in the previous scenario node2 would not give our virtual IP and services back to node1 when node1 comes back alive; node1 would remain inactive and node2 would continue to serve our virtual IP/HA services until node2 eventually fails and node1 takes over for it. 'auto_failback on' causes more session disconnects/connection drops from a user perspective, but it may be a necessary option for you if you do not have shared storage (e.g. for user email, NFS/shared storage, etc). For example, say you want to also serve highly available NFS exports (data) using your virtual IP and all of your data is on node1: if 'auto_failback off' is used and node1 fails, users will browse to their NFS share and not see any of the data that exists on node1's NFS shares!}}<br />
<br />
{{note|If you want to use 'auto_failback off' and still share NFS exports etc., this can be overcome by using something like rsync, FAM & ssh/scp to frequently copy over only the new data (see FAM) and keep the two nodes in sync. One could also use DRBD on both nodes to mirror two partitions over ethernet; essentially RAID1 over ethernet. DRBD (available in the AUR) is the most commonly used tool to solve this problem. If you choose to use DRBD for this, please note that you want to set your two DRBD partitions secondary/secondary initially and have heartbeat/the haresources file mount your DRBD partition as primary on the currently active node ONLY. I'd also HIGHLY recommend using a cluster-friendly file system like GFS2 or OCFS2 for a shared DRBD device/partition! Another possible solution is to use pNFS if you choose to use NFS to share data that will be simultaneously accessed/written/altered by both nodes at the same time, to prevent file corruption or at least any drastic issues with .lock files.}}<br />
<br />
{{note|1=Using 'bcast eth0' tells heartbeat to send out the 'beats/heartbeats/ack/alive queries' via broadcast on the interface eth0. You may also use unicast if you want, like so: 'ucast eth0 put.the.other.nodes.ip.address.here', or serial using two lines, 'baud 9200' & '/dev/ttyS0'; if you use serial to communicate you will need a null modem/crossover serial cable. It is also possible to use a second network interface in each node and connect them using an RJ45/cat5e crossover cable; in a crossover cable setup you would likely want to use unicast, but broadcast will work also. For a two-node active/passive heartbeat cluster I prefer to use a null modem/crossover serial cable to connect the two heartbeat nodes, as it is far simpler, less likely to have issues, and does not require installing any extra NIC cards in our nodes. For the purposes of this article, however, we will use broadcast on the very same interface (eth0) our two heartbeat nodes are already using to connect to our router/switch, with no extra heartbeat-specific NIC card/no extra hardware of any kind. (Because of this, any router/switch failure in our node1<==(?switch/router)?==>node2 signal path will throw our cluster offline & bring down our H.A services on BOTH nodes; this is why I prefer a null modem/serial crossover cable setup! Generally you want the signal path the 'heartbeats' are sent on, e.g. the path the two heartbeat 'nodes' communicate over, to be as simple, direct, & reliable as possible!)}}<br />
<br />
{{Note|If you should choose to use unicast in your /etc/ha.d/ha.cf file, for say a dedicated heartbeat connection with a cat5e crossover cable & extra NICs in our two nodes, the unicast line on each node should reference the other node's IP address. For example, the unicast line in node1's ha.cf would look like 'ucast eth0 node2.ip.address.here', and the unicast line on node2 would be the opposite: 'ucast eth0 node1.ip.address.here'. '''In ALL other scenarios, our configuration should be 100% identical on both nodes.'''}}<br />
<br />
<br />
'''03. Edit/create the file /etc/ha.d/haresources as follows:'''<br />
<br />
node1 IPaddr::100.200.230.1/100.200.230.6 named httpd mysqld<br />
<br />
{{Note|This file should be EXACTLY the same on both nodes!!!<br />
...explanation: node1 is the hostname of our first/primary node (see above); IPaddr::100.200.230.1/100.200.230.6 is the virtual IP address, plus the gateway/device we expect to assign this 'virtual' WAN IP address to us. In most circumstances the gateway definition will not be necessary (e.g. remove '/100.200.230.6' so it reads 'IPaddr::100.200.230.1'), but for my purposes, because of some funky behaviour of my AT&T 2wire U-verse gateway and how it assigns its static IP addresses, heartbeat would fail to bring up the virtual IP address/eth0:0 alias without this specifically defined in the haresources file. Please remove 'named httpd mysqld' from haresources if you will not be running BIND, Apache/another web server, or MySQL Server; otherwise heartbeat will try to start bind, apache, and mysql server and COMPLETELY FAIL to start if you do not have startup scripts for these in /etc/rc.d/ (heartbeat looks for startup scripts/resource agents in BOTH /etc/rc.d/* & /etc/ha.d/resource.d; the resource agent 'IPaddr' is in /etc/ha.d/resource.d for example).}}<br />
<br />
{{Note|You MAY define more than one (as many as you like) virtual IP addresses; this is desirable in MOST scenarios. For example, your dns registrar requires that you have two name servers and you would like to host your own dns zone. For this our haresources file would look like: 'node1 IPaddr::100.200.230.1/100.200.230.6 IPaddr::100.200.230.0/100.200.230.6 named httpd mysqld', where we have two highly available IPs, 100.200.230.0 & 100.200.230.1}}<br />
<br />
'''04. At the end of the 'node1 IPaddr::100.200.230.1/100.200.230.6' line in our /etc/ha.d/haresources file, append (on the same line) the names of the startup scripts/resource agents for whatever services you would like heartbeat to make highly available. In the example above in step 03 we have told heartbeat to manage named (BIND), httpd (Apache), mysqld (MySQL Server), see here:'''<br />
<br />
IPaddr::100.200.230.1/100.200.230.6 IPaddr::100.200.230.0/100.200.230.6 named httpd mysqld<br />
<br />
{{Note|Any services you define in your /etc/ha.d/haresources file MUST NOT be defined in your /etc/rc.conf daemons array, or heartbeat will fail when trying to start the service (it will already be running). It is not recommended to tell heartbeat to manage services like sshd (SSH); e.g. if we tell heartbeat to manage sshd, we will only be able to ssh into whichever node is currently active as our master node, and not the slave node.}}<br />
<br />
'''05. Edit /etc/rc.conf and place heartbeat at the END of your daemons array...'''<br />
<br />
That's it! Fire up both nodes, pull the plug on your primary node/node1, and check node two to see that it has taken over your H.A services & virtual IP address, e.g. with 'ip addr show' and 'ps aux'.</div>Ploomshttps://wiki.archlinux.org/index.php?title=Very_Secure_FTP_Daemon&diff=297176Very Secure FTP Daemon2014-02-13T10:19:33Z<p>Plooms: spell check</p>
<hr />
<div>[[Category:File Transfer Protocol]]<br />
[[cs:Very Secure FTP Daemon]]<br />
[[es:Very Secure FTP Daemon]]<br />
[[it:Very Secure FTP Daemon]]<br />
[[ru:Very Secure FTP Daemon]]<br />
[[zh-CN:Very Secure FTP Daemon]]<br />
'''vsftpd''' (Very Secure FTP Daemon) is a lightweight, stable and secure FTP server for UNIX-like systems.<br />
<br />
== Installation ==<br />
Simply install {{pkg|vsftpd}} from the [[Official Repositories]].<br />
<br />
To start the server:<br />
# systemctl start vsftpd.service<br />
<br />
If you want it to be started automatically at boot:<br />
# systemctl enable vsftpd.service<br />
<br />
See the xinetd section below for procedures to use vsftpd with xinetd.<br />
<br />
== Configuration ==<br />
Most of the settings in vsftpd are done by editing the file {{ic|/etc/vsftpd.conf}}. The file itself is well-documented, so this section only highlights some important changes you may want to make. For all available options and documentation, see {{ic|man 5 vsftpd.conf}} (or the [https://security.appspot.com/vsftpd/vsftpd_conf.html vsftpd.conf online manpage]). Files are served by default from {{ic|/srv/ftp}}.<br />
<br />
=== Enabling uploading ===<br />
The {{Ic|write_enable}} option must be set to YES in {{ic|/etc/vsftpd.conf}} in order to allow changes to the filesystem, such as uploading:<br />
write_enable=YES<br />
<br />
=== Local user login ===<br />
One must add the following line to {{ic|/etc/vsftpd.conf}} to allow users in {{ic|/etc/passwd}} to login:<br />
local_enable=YES<br />
<br />
=== Anonymous login ===<br />
The following lines in {{ic|/etc/vsftpd.conf}} control whether and how anonymous users can login:<br />
# Allow anonymous login<br />
anonymous_enable=YES<br />
# No password is required for an anonymous login <br />
no_anon_password=YES<br />
# Maximum transfer rate for an anonymous client in Bytes/second <br />
anon_max_rate=30000 <br />
# Directory to be used for an anonymous login <br />
anon_root=/example/directory/<br />
<br />
=== Chroot jail ===<br />
One can set up a chroot environment which prevents the user from leaving its home directory. To enable this, add the following lines to {{ic|/etc/vsftpd.conf}}:<br />
chroot_list_enable=YES<br />
chroot_list_file=/etc/vsftpd.chroot_list<br />
The {{Ic|chroot_list_file}} variable specifies the file which contains users that are jailed.<br />
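<br />
The list file itself simply contains one user name per line, e.g. (the names are illustrative):<br />
 user1<br />
 user2<br />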
<br />
For a more restricted environment, one can specify the line:<br />
chroot_local_user=YES<br />
This will make local users jailed by default. In this case, the file specified by {{Ic|chroot_list_file}} lists users that are '''not''' in a chroot jail.<br />
<br />
=== Limiting user login ===<br />
It's possible to prevent users from logging into the FTP server by adding two lines to {{ic|/etc/vsftpd.conf}}:<br />
userlist_enable=YES<br />
userlist_file=/etc/vsftpd.user_list<br />
{{Ic|userlist_file}} now specifies the file which lists users that are not able to login.<br />
<br />
If you only want to allow certain users to login, add the line:<br />
userlist_deny=NO<br />
The file specified by {{Ic|userlist_file}} will now contain users that are able to login.<br />
<br />
=== Limiting connections ===<br />
One can limit the data transfer rate, number of clients and connections per IP for local users by adding the information in {{ic|/etc/vsftpd.conf}}:<br />
 # Maximum data transfer rate in bytes per second<br />
 local_max_rate=1000000<br />
 # Maximum number of clients that may be connected<br />
 max_clients=50<br />
 # Maximum connections per IP<br />
 max_per_ip=2<br />
<br />
=== Using xinetd ===<br />
{{Out of date|Contains reference to rc.conf, which does not exist anymore.}}<br />
If you want to use vsftpd with xinetd, add the following lines to {{ic|/etc/xinetd.d/vsftpd}}:<br />
<pre><br />
service ftp<br />
{<br />
socket_type = stream<br />
wait = no<br />
user = root<br />
server = /usr/bin/vsftpd<br />
log_on_success += HOST DURATION<br />
log_on_failure += HOST<br />
disable = no<br />
}<br />
</pre><br />
<br />
The option below should be set in {{ic|/etc/vsftpd.conf}}:<br />
pam_service_name=ftp<br />
<br />
Finally, add xinetd to your daemons line in {{ic|/etc/rc.conf}}. You do not need to add vsftpd, as it will be called by xinetd whenever necessary.<br />
<br />
If you get errors like this while connecting to the server:<br />
500 OOPS: cap_set_proc<br />
You need to add ''capability'' in MODULES= line in {{ic|/etc/rc.conf}}.<br />
<br />
While upgrading to version 2.1.0 you might get an error like this when connecting to the server from a client:<br />
500 OOPS: could not bind listening IPv4 socket<br />
In earlier versions it has been enough to leave the following lines commented:<br />
# Use this to use vsftpd in standalone mode, otherwise it runs through (x)inetd<br />
# listen=YES<br />
In this newer version, and maybe future releases, it is necessary however to explicitly configure it to '''not''' run in a standalone mode, like this:<br />
# Use this to use vsftpd in standalone mode, otherwise it runs through (x)inetd<br />
listen=NO<br />
<br />
=== Using SSL to Secure FTP ===<br />
<br />
Generate an SSL Cert, e.g. like that: <br />
# cd /etc/ssl/certs<br />
# openssl req -x509 -nodes -days 7300 -newkey rsa:2048 -keyout /etc/ssl/certs/vsftpd.pem -out /etc/ssl/certs/vsftpd.pem<br />
# chmod 600 /etc/ssl/certs/vsftpd.pem<br />
You will be asked a number of questions about your company etc.; as your certificate is not a trusted one, it does not really matter what you fill in, since you will only use it for encryption. If you need a certificate that clients can trust, get one from a CA like Thawte, Verisign, etc.<br />
<br />
Then edit your configuration {{ic|/etc/vsftpd.conf}}:<br />
<pre><br />
#this is important<br />
ssl_enable=YES<br />
<br />
#choose what you like, if you accept anon-connections<br />
# you may want to enable this<br />
# allow_anon_ssl=NO<br />
<br />
#choose what you like,<br />
# it is mainly a matter of performance<br />
# force_local_data_ssl=NO<br />
<br />
#choose what you like<br />
force_local_logins_ssl=YES<br />
<br />
#you should at least enable this if you enable ssl...<br />
ssl_tlsv1=YES<br />
#choose what you like<br />
ssl_sslv2=YES<br />
#choose what you like<br />
ssl_sslv3=YES<br />
#give the correct path to your currently generated *.pem file<br />
rsa_cert_file=/etc/ssl/certs/vsftpd.pem<br />
#the *.pem file contains both the key and cert<br />
rsa_private_key_file=/etc/ssl/certs/vsftpd.pem<br />
</pre><br />
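<br />
You can inspect the generated certificate and check its validity period with:<br />
 # openssl x509 -in /etc/ssl/certs/vsftpd.pem -noout -dates<br />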
<br />
=== Dynamic DNS ===<br />
Make sure you put the following two lines in {{ic|/etc/vsftpd.conf}}:<br />
pasv_addr_resolve=YES<br />
pasv_address=yourdomain.noip.info<br />
Since {{ic|1=pasv_addr_resolve=YES}} lets vsftpd resolve the hostname itself, it is '''not''' necessary to use a script that periodically updates pasv_address and restarts the server.<br />
{{Note|You won't be able to connect in passive mode via LAN anymore. Try the active mode on your LAN PC's FTP client.}}<br />
<br />
=== Port configurations ===<br />
Especially for private FTP servers that are exposed to the web it's recommended to change the listening port to something other than the standard port 21. This can be done using the following lines in {{ic|/etc/vsftpd.conf}}:<br />
listen_port=2211<br />
Furthermore a custom passive port range can be given by:<br />
pasv_min_port=49152<br />
pasv_max_port=65534<br />
<br />
=== Configuring iptables ===<br />
Often the server running the FTP daemon is protected by an [[iptables]] firewall. To allow access to the FTP server the corresponding port needs to be opened using something like<br />
# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 2211 -j ACCEPT<br />
This article won't provide any instruction on how to set up iptables but here is an example: [[Simple stateful firewall]].<br />
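<br />
If you configured a custom passive port range as shown above, that range must be opened as well, e.g. (matching the example range):<br />
 # iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 49152:65534 -j ACCEPT<br />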
<br />
There are some kernel modules needed for proper FTP connection handling by iptables, especially ''ip_conntrack_ftp''. It is needed because FTP uses the given ''listen_port'' (21 by default) for commands only; all data transfer is done over different ports. These ports are chosen by the FTP daemon at random for each session (also depending on whether active or passive mode is used). To tell iptables that packets on these ports should be accepted, ''ip_conntrack_ftp'' is required. To load it automatically on boot, create a new file in {{ic|/etc/modules-load.d}}, e.g.:<br />
# echo ip_conntrack_ftp > /etc/modules-load.d/ip_conntrack_ftp.conf<br />
<br />
If you changed the ''listen_port'' you also need to configure the conntrack module accordingly:<br />
{{hc|/etc/modprobe.d/ip_conntrack_ftp.conf|<nowiki><br />
options nf_conntrack_ftp ports=2211<br />
options ip_conntrack_ftp ports=2211</nowiki>}}<br />
<br />
== Tips and tricks ==<br />
=== PAM with virtual users ===<br />
Using virtual users has the advantage of not requiring a real login account on the system. Keeping the environment in a container is of course a more secure option.<br />
<br />
A virtual users database has to be created by first making a simple text file like this:<br />
user1<br />
password1<br />
user2<br />
password2<br />
Include as many virtual users as you wish according to the structure in the example. Save it as logins.txt; the file name has no significance. The next step relies on the Berkeley database system, which is included in the core system of Arch. As root, create the actual database with the help of the logins.txt file, or whatever you chose to call it:<br />
# db_load -T -t hash -f logins.txt /etc/vsftpd_login.db<br />
{{Warning|The above command gives the wrong impression that db_load hashes the passwords. A user with read access to the database file will be able to print the passwords back onto the terminal like so:<br />
# strings /etc/vsftpd_login.db <br />
which demonstrates that db_load does NOT hash the passwords.}}<br />
It is recommended to restrict permissions for the now created {{ic|vsftpd_login.db}} file:<br />
# chmod 600 /etc/vsftpd_login.db<br />
{{Warning|Be aware that storing passwords in plain text is not safe. Do not forget to remove your temporary file with {{Ic|rm logins.txt}}.}}<br />
PAM should now be set to make use of vsftpd_login.db. To make PAM check for user authentication create a file named ftp in the {{ic|/etc/pam.d/}} directory with the following information:<br />
auth required pam_userdb.so db=/etc/vsftpd_login crypt=hash <br />
account required pam_userdb.so db=/etc/vsftpd_login crypt=hash<br />
{{Note|We use /etc/vsftpd_login without .db extension in PAM-config!}}<br />
Now it is time to create a home for the virtual users. In this example {{ic|/srv/ftp}} is chosen to host the data for virtual users, which also reflects the default directory structure of Arch. First create the general user ''virtual'' and make {{ic|/srv/ftp}} its home:<br />
# useradd -d /srv/ftp virtual<br />
Make virtual the owner:<br />
# chown virtual:virtual /srv/ftp<br />
Configure vsftpd to use the created environment by editing {{ic|/etc/vsftpd.conf}}. These are the necessary settings to make vsftpd restrict access to virtual users, by user-name and password, and restrict their access to the specified area {{ic|/srv/ftp}}:<br />
anonymous_enable=NO<br />
local_enable=YES<br />
chroot_local_user=YES<br />
guest_enable=YES<br />
guest_username=virtual<br />
virtual_use_local_privs=YES<br />
If the xinetd method is used, start the service. You should now only be allowed to login with a user name and password from the database created above.<br />
<br />
==== Adding private folders for the virtual users ====<br />
First create directories for users:<br />
# mkdir /srv/ftp/user1<br />
# mkdir /srv/ftp/user2<br />
# chown virtual:virtual /srv/ftp/user?/<br />
<br />
Then, add the following lines to {{ic|/etc/vsftpd.conf}}:<br />
local_root=/srv/ftp/$USER<br />
user_sub_token=$USER<br />
<br />
== Troubleshooting ==<br />
<br />
=== vsftpd: no connection (Error 500) with recent kernels (3.5 and newer) and .service ===<br />
add this to your /etc/vsftpd.conf<br />
seccomp_sandbox=NO<br />
<br />
=== vsftpd: refusing to run with writable root inside chroot() ===<br />
As of vsftpd 2.3.5, the chroot directory that users are locked into must not be writable. This is in order to prevent a security vulnerability.<br />
<br />
The safe way to allow upload is to keep chroot enabled, and configure your FTP directories.<br />
<br />
local_root=/srv/ftp/user<br />
<br />
 # mkdir -p /srv/ftp/user/upload<br />
 # chmod 550 /srv/ftp/user<br />
 # chmod 750 /srv/ftp/user/upload<br />
<br />
If you must:<br />
<br />
You can put this into your {{ic|/etc/vsftpd.conf}} to work around this security enhancement (since vsftpd 3.0.0; from [http://www.benscobie.com/fixing-500-oops-vsftpd-refusing-to-run-with-writable-root-inside-chroot/ Fixing 500 OOPS: vsftpd: refusing to run with writable root inside chroot ()]):<br />
allow_writeable_chroot=YES<br />
Alternatively, install vsftpd-ext from the [[AUR]] and set {{ic|1=allow_writable_root=YES}} in the conf file.<br />
<br />
=== FileZilla Client: GnuTLS error -8 when connecting via SSL ===<br />
vsftpd tries to display plain-text error messages in the SSL session. In order to debug this, temporarily disable encryption and you will see the correct error message.[http://ramblings.linkerror.com/?p=45]<br />
<br />
=== vsftpd.service fails to run on boot ===<br />
If you have enabled the vsftpd service and it fails to run on boot, make sure it is set to load after network.target in the service file:<br />
<br />
{{hc|/usr/lib/systemd/system/vsftpd.service|2=<br />
[Unit]<br />
Description=vsftpd daemon<br />
After=network.target<br />
}}<br />
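<br />
Rather than editing the unit under {{ic|/usr/lib}} directly (pacman may overwrite it on upgrade), the same ordering can be added in a drop-in snippet. A minimal sketch; the snippet file name is an arbitrary example:<br />
<br />
{{hc|/etc/systemd/system/vsftpd.service.d/order.conf|2=<br />
[Unit]<br />
After=network.target<br />
}}<br />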
<br />
== See also ==<br />
* [http://vsftpd.beasts.org/ vsftpd official homepage]<br />
* [http://vsftpd.beasts.org/vsftpd_conf.html vsftpd.conf man page]</div>Ploomshttps://wiki.archlinux.org/index.php?title=Preboot_Execution_Environment&diff=294571Preboot Execution Environment2014-01-27T03:49:38Z<p>Plooms: Undo revision 294569 by Plooms (talk)</p>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
[[Category:Networking]]<br />
[[Category:Boot process]]<br />
[[es:PXE]]<br />
[[fr:Install PXE]]<br />
{{Related articles start}}<br />
{{Related|Diskless System}}<br />
{{Related articles end}}<br />
From [[Wikipedia:Preboot Execution Environment]]:<br />
:''The Preboot eXecution Environment (PXE, also known as Pre-Execution Environment; sometimes pronounced "pixie") is an environment to boot computers using a network interface independently of data storage devices (like hard disks) or installed operating systems.''<br />
<br />
In this guide, PXE is used to boot the installation media over the network; this requires a NIC on the target with an option ROM that supports PXE.<br />
<br />
== Preparation ==<br />
<br />
Download the latest official install media from [https://www.archlinux.org/download/ here].<br />
<br />
Next mount the image:<br />
<br />
{{bc|1=<br />
# mkdir -p /mnt/archiso<br />
# mount -o loop,ro archlinux-2013.11.01-dual.iso /mnt/archiso}}<br />
<br />
== Server setup ==<br />
<br />
You will need to set up a DHCP, TFTP, and HTTP server to configure networking, load pxelinux/kernel/initramfs, and finally load the root filesystem (respectively).<br />
<br />
=== Network ===<br />
<br />
Bring up your wired NIC, and assign it an address appropriately.<br />
<br />
{{bc|1=<br />
# ip link set eth0 up<br />
# ip addr add 192.168.0.1/24 dev eth0}}<br />
<br />
=== DHCP + TFTP ===<br />
<br />
You will need both a DHCP and TFTP server to configure networking on the install target and to facilitate the transfer of files between the PXE server and client; dnsmasq does both, and is extremely easy to set up.<br />
<br />
Install {{pkg|dnsmasq}}:<br />
<br />
{{bc|# pacman -S dnsmasq}}<br />
<br />
Configure {{pkg|dnsmasq}}:<br />
<br />
{{hc|# vim /etc/dnsmasq.conf|2=<br />
port=0<br />
interface=eth0<br />
bind-interfaces<br />
dhcp-range=192.168.0.50,192.168.0.150,12h<br />
dhcp-boot=/arch/boot/syslinux/lpxelinux.0<br />
dhcp-option-force=209,boot/syslinux/archiso.cfg<br />
dhcp-option-force=210,/arch/<br />
enable-tftp<br />
tftp-root=/mnt/archiso}}<br />
<br />
Start {{pkg|dnsmasq}}:<br />
<br />
{{bc|# systemctl start dnsmasq.service}}<br />
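<br />
To verify that dnsmasq is serving files over TFTP, you can, for example, fetch the bootloader from another machine with the {{pkg|tftp-hpa}} client (the path must match your {{ic|dhcp-boot}} entry):<br />
<br />
{{bc|$ tftp 192.168.0.1 -c get arch/boot/syslinux/lpxelinux.0}}<br />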
<br />
=== HTTP ===<br />
<br />
Thanks to recent changes in [[Archiso|archiso]], it is now possible to boot from HTTP (archiso_pxe_http initcpio hook) or NFS (archiso_pxe_nfs initcpio hook); among all alternatives, darkhttpd is by far the most trivial to set up (and the lightest-weight).<br />
<br />
First, install {{pkg|darkhttpd}}:<br />
<br />
{{bc|# pacman -S darkhttpd}}<br />
<br />
Then start {{pkg|darkhttpd}} using {{ic|/mnt/archiso}} as the document root:<br />
<br />
{{hc|# darkhttpd /mnt/archiso|2=<br />
darkhttpd/1.8, copyright (c) 2003-2011 Emil Mikulic.<br />
listening on: http://0.0.0.0:80/}}<br />
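<br />
To confirm that the boot files are reachable over HTTP, you can, for example, issue a HEAD request for the kernel with {{pkg|curl}}:<br />
<br />
{{bc|$ curl -I http://192.168.0.1/arch/boot/x86_64/vmlinuz}}<br />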
<br />
== Installation ==<br />
<br />
For this portion, you will need to figure out how to tell the client to attempt a PXE boot; usually there will be a hint in a corner of the screen, along with the normal POST messages, on which key to press to try PXE booting first. On an IBM x3650, ''F12'' brings up a boot menu, the first option of which is ''Network''; on a Dell PE 1950/2950, pressing ''F12'' initiates PXE booting directly.<br />
<br />
=== Boot ===<br />
<br />
Watching [[Systemd#Journal|journald]] on the PXE server will provide some additional insight into what exactly is going on during the early stages of the PXE boot process:<br />
{{hc|<nowiki># journalctl -u dnsmasq -f</nowiki>|2=<br />
<nowiki><br />
dnsmasq-dhcp[2544]: DHCPDISCOVER(eth1) 00:1a:64:6a:a2:4d <br />
dnsmasq-dhcp[2544]: DHCPOFFER(eth1) 192.168.0.110 00:1a:64:6a:a2:4d <br />
dnsmasq-dhcp[2544]: DHCPREQUEST(eth1) 192.168.0.110 00:1a:64:6a:a2:4d <br />
dnsmasq-dhcp[2544]: DHCPACK(eth1) 192.168.0.110 00:1a:64:6a:a2:4d <br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/syslinux/pxelinux.0 to 192.168.0.110<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/syslinux/archiso.cfg to 192.168.0.110<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/syslinux/whichsys.c32 to 192.168.0.110<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/syslinux/archiso_pxe_choose.cfg to 192.168.0.110<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/syslinux/ifcpu64.c32 to 192.168.0.110<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/syslinux/archiso_pxe_both_inc.cfg to 192.168.0.110<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/syslinux/archiso_head.cfg to 192.168.0.110<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/syslinux/archiso_pxe32.cfg to 192.168.0.110<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/syslinux/archiso_pxe64.cfg to 192.168.0.110<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/syslinux/archiso_tail.cfg to 192.168.0.110<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/syslinux/vesamenu.c32 to 192.168.0.110<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/syslinux/splash.png to 192.168.0.110</nowiki>}}<br />
<br />
After {{ic|pxelinux.0}} and {{ic|archiso.cfg}} have been loaded via TFTP, you will (hopefully) be presented with a [[Syslinux|syslinux]] boot menu with several options, two of which are of interest here.<br />
<br />
Select either<br />
<br />
{{bc|Boot Arch Linux (x86_64) (HTTP)}}<br />
<br />
or<br />
<br />
{{bc|Boot Arch Linux (i686) (HTTP)}}<br />
<br />
depending on your CPU architecture. <br />
<br />
Next the kernel and initramfs (appropriate for the architecture you selected) will be transferred, again via TFTP:<br />
<br />
{{bc|1=<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/x86_64/vmlinuz to 192.168.0.110<br />
dnsmasq-tftp[2544]: sent /mnt/archiso/arch/boot/x86_64/archiso.img to 192.168.0.110}}<br />
<br />
If all goes well, you should then see activity on darkhttpd coming from the PXE target; at this point, the kernel has been loaded on the PXE target and the initramfs is fetching the filesystem images:<br />
<br />
{{bc|1=<br />
1348347586 192.168.0.110 "GET /arch/aitab" 200 678 "" "curl/7.27.0"<br />
1348347587 192.168.0.110 "GET /arch/x86_64/root-image.fs.sfs" 200 107860206 "" "curl/7.27.0"<br />
1348347588 192.168.0.110 "GET /arch/x86_64/usr-lib-modules.fs.sfs" 200 36819181 "" "curl/7.27.0"<br />
1348347588 192.168.0.110 "GET /arch/any/usr-share.fs.sfs" 200 63693037 "" "curl/7.27.0"}}<br />
<br />
After the root filesystem is downloaded via HTTP, you will eventually end up at a root zsh prompt with that fancy [https://www.archlinux.org/packages/extra/any/grml-zsh-config/ grml config].<br />
<br />
=== Post-boot ===<br />
<br />
Unless you want all traffic to be routed through your PXE server (which will not work anyway unless you [[Simple Stateful Firewall#Setting up a NAT gateway|set it up properly]]), you will want to kill {{pkg|dnsmasq}} and get a new lease on the install target, as appropriate for your network layout.<br />
<br />
{{bc|# systemctl stop dnsmasq.service}}<br />
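<br />
A minimal sketch of getting a new lease, assuming another DHCP server serves your LAN and the target's NIC is {{ic|eth0}}:<br />
<br />
{{bc|# dhcpcd eth0}}<br />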
<br />
You can also kill {{pkg|darkhttpd}}; the target has already downloaded the root filesystem, so it's no longer needed. While you are at it, you can also unmount the installation image:<br />
<br />
{{bc|# umount /mnt/archiso}}<br />
<br />
At this point you can follow the [[Installation_Guide|official installation guide]].<br />
<br />
== Alternate methods ==<br />
<br />
As implied in the syslinux menu, there are several other alternatives:<br />
<br />
=== NFS ===<br />
<br />
You will need to set up an [[NFS|NFS server]] with an export at the root of your mounted installation media, which would be {{ic|/mnt/archiso}} if you followed the [[#Preparation|earlier sections]] of this guide. After setting up the server, add the following line to your {{ic|/etc/exports}} file:<br />
<br />
{{hc|/etc/exports|/mnt/archiso 192.168.0.0/24(ro,no_subtree_check)}}<br />
<br />
If the server was already running, re-export the filesystems with {{ic|exportfs -r -a -v}}.<br />
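<br />
If the NFS server has not been started yet, start it first (assuming {{pkg|nfs-utils}} provides the server):<br />
<br />
{{bc|# systemctl start nfs-server.service}}<br />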
<br />
The default settings in the installer expect to find the NFS at {{ic|/run/archiso/bootmnt}}, so you will need to edit the boot options. To do this, press Tab on the appropriate boot menu choice and edit the {{ic|archiso_nfs_srv}} option accordingly:<br />
<br />
{{bc|1=archiso_nfs_srv=${pxeserver}:/mnt/archiso}}<br />
<br />
Alternatively, mount the image at {{ic|/run/archiso/bootmnt}} on the server and export that path instead, so the default boot options can be used unchanged.<br />
<br />
After the kernel loads, the Arch bootstrap image will copy the root filesystem via NFS to the booting host. This can take a little while. Once this completes, you should have a running system.<br />
<br />
=== NBD ===<br />
<br />
{{Accuracy|verify}}<br />
{{Expansion}}<br />
<br />
Install {{pkg|nbd}} and configure it:<br />
<br />
{{hc|# vim /etc/nbd-server/config|2=<br />
[generic]<br />
[archiso]<br />
readonly = true<br />
exportname = /srv/archlinux-2013.02.01-dual.iso}}<br />
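<br />
Then start the NBD server; a minimal sketch, assuming the {{ic|nbd.service}} unit shipped with the {{pkg|nbd}} package:<br />
<br />
{{bc|# systemctl start nbd.service}}<br />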
<br />
=== Low memory ===<br />
<br />
The {{ic|copytoram}} [[mkinitcpio|initramfs]] option controls whether the root filesystem is copied to RAM in its entirety during early boot.<br />
<br />
It is highly recommended to leave this option alone; disable it only when absolutely necessary (systems with less than ~256 MB of physical memory). Append {{ic|<nowiki>copytoram=n</nowiki>}} to your kernel line if you wish to do so. Be aware that this option currently does not work when using HTTP for the transfer; NFS or NBD must be used.</div>Plooms