= CurlFtpFS =<br />
[[Category:File systems]]<br />
[[Category:File Transfer Protocol]]<br />
[[ja:CurlFtpFS]]<br />
[[zh-hans:CurlFtpFS]]<br />
{{Expansion}}<br />
<br />
{{Related articles start}}<br />
{{Related|List of applications/Internet#FTP clients}}<br />
{{Related articles end}}<br />
<br />
{{Note|As of February 2015, curlftpfs is reported to be extremely slow, see for example [https://bugs.launchpad.net/ubuntu/+source/curlftpfs/+bug/1267749 an Ubuntu bug report] and a [http://stackoverflow.com/questions/24360479/ftp-with-curlftpfs-is-extremely-slow-to-the-point-it-is-impossible-to-work-with Stack Overflow question].}}<br />
<br />
[http://curlftpfs.sourceforge.net/ CurlFtpFS] is a filesystem for accessing FTP hosts based on FUSE and libcurl.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|curlftpfs}} package.<br />
<br />
If needed, make sure that the {{ic|fuse}} kernel module is loaded:<br />
# modprobe fuse<br />
<br />
== Mount FTP folder as root ==<br />
<br />
Create the mount point and then mount the FTP folder.<br />
# mkdir /mnt/ftp<br />
# curlftpfs ftp.yourserver.com /mnt/ftp/ -o user=username:password<br />
If you want to give other (regular) users access rights, use the {{ic|allow_other}} option:<br />
# curlftpfs ftp.yourserver.com /mnt/ftp/ -o user=username:password,allow_other<br />
Do not add a space after the comma, or the {{ic|allow_other}} argument will not be recognized.<br />
<br />
To use FTP in active mode, add the {{ic|1=ftp_port=-}} option:<br />
# curlftpfs ftp.yourserver.com /mnt/ftp/ -o user=username:password,allow_other,ftp_port=-<br />
<br />
You can add this line to {{ic|/etc/fstab}} to mount automatically:<br />
curlftpfs#USER:PASSWORD@ftp.domain.org /mnt/mydomainorg fuse auto,user,uid=1000,allow_other,_netdev 0 0<br />
<br />
{{Tip|1=You can use codepage="''string''" if you have problems with non-US-English characters on servers that do not support UTF-8, e.g. codepage="iso8859-1".}}<br />
<br />
To prevent the password from being shown in the process list, create a {{ic|.netrc}} file in the home directory of the user running curlftpfs, {{ic|chmod 600}} it, and give it the following content:<br />
<br />
machine ftp.yourserver.com<br />
login username<br />
password mypassword<br />
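<br />
To restrict the file's permissions so that only its owner can read it:<br />
<br />
 $ chmod 600 ~/.netrc<br />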
<br />
== Mount FTP folder as normal user ==<br />
<br />
You can also mount as a normal user (always use the {{ic|.netrc}} file for the credentials, and use SSL encryption!):<br />
$ mkdir ~/my-server<br />
<nowiki>$ curlftpfs -o ssl,utf8 ftp://my-server.tld/ ~/my-server</nowiki><br />
If the command fails with<br />
 Error connecting to ftp: QUOT command failed with 500<br />
then the server does not support the {{ic|utf8}} option. Leave it out and all will be fine.<br />
{{Tip|1=If need be, try setting the encoding explicitly, for example with -o codepage="iso8859-1".}}<br />
<br />
To unmount:<br />
$ fusermount -u ~/my-server<br />
<br />
== Connect to encrypted server ==<br />
In its default settings, CurlFtpFS will authenticate in cleartext when connecting to an unencrypted port. If the remote server is configured to refuse unencrypted authentication and force encrypted authentication, CurlFtpFS will fail with a non-descriptive error:<br />
 Error connecting to ftp: Access denied: 530<br />
<br />
To authenticate to the FTP server using explicit encrypted authentication, you must specify the {{ic|ssl}} or {{ic|tlsv1}} option:<br />
 # curlftpfs ftp.yourserver.com /mnt/ftp/ -o ssl,user=username:password<br />
<br />
If your server uses a self-signed certificate that is not trusted by your computer, you can tell CurlFtpFS to ignore the verification:<br />
 # curlftpfs ftp.yourserver.com /mnt/ftp/ -o ssl,no_verify_peer,no_verify_hostname,user=username:password<br />
<br />
An implicit TLS mode is also available. For more details, check the manual page.<br />
<br />
= CrashPlan =<br />
[[Category:Data compression and archiving]]<br />
[[Category:System recovery]]<br />
CrashPlan is a backup program that backs up data to remote servers, other computers, or hard drives. Backing up to its cloud servers requires a monthly subscription.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|crashplan}} from the [[AUR]]. The paid enterprise packages {{AUR|crashplan-pro}} and {{AUR|crashplan-pro-e}} are also available.<br />
<br />
==Basic Usage==<br />
<br />
Before accessing CrashPlan's graphical user interface, you should start the service:<br />
<br />
# systemctl start crashplan.service<br />
<br />
CrashPlan can be configured entirely through its graphical user interface. To start the graphical interface:<br />
<br />
$ CrashPlanDesktop<br />
<br />
To make CrashPlan automatically start upon system startup:<br />
<br />
# systemctl enable crashplan.service<br />
<br />
==Running Crashplan on a headless server==<br />
<br />
Running CrashPlan on a headless server is not officially supported. However, it is possible to do so.<br />
<br />
The CrashPlan daemon's configuration files (in {{ic|/opt/crashplan/conf}}) are in an obscure XML format, and they are meant to be edited programmatically by the CrashPlan client. The CrashPlan client and daemon communicate on port 4243 by default. Thus, an easy way of configuring the CrashPlan daemon on a headless server is to create an SSH tunnel:<br />
<br />
# Start the CrashPlan daemon. On the server: {{ic|systemctl start crashplan.service}}.<br />
# Create an SSH tunnel. On the client: {{ic|ssh -N -L 4243:localhost:4243 headless.example.com}}.<br />
# Start the CrashPlan client. (Again, the executable is named {{ic|CrashPlanDesktop}}.)<br />
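<br />
Putting these steps together (the hostname {{ic|headless.example.com}} is a placeholder; run each command on the machine indicated):<br />
<br />
 [server] # systemctl start crashplan.service<br />
 [client] $ ssh -N -L 4243:localhost:4243 headless.example.com &<br />
 [client] $ CrashPlanDesktop<br />
<br />
The {{ic|ssh -N}} process keeps the tunnel open; leave it running (here backgrounded with {{ic|&}}) for as long as the client is in use.<br />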
<br />
More ideas can be found on these websites:<br />
<br />
* The CrashPlan support site [http://support.code42.com/CrashPlan/Latest/Configuring/Configuring_A_Headless_Client details] a slightly more complicated method of tunneling traffic from the client (CrashPlan Desktop) to the daemon (CrashPlan Engine) through an SSH tunnel.<br />
* A [http://www.liquidstate.net/how-to-manage-your-crashplan-server-remotely/ post by Bryan Ross] details how to make CrashPlan Desktop connect directly to CrashPlan Engine. Note that this method can be less secure than tunneling traffic through an SSH tunnel.<br />
<br />
==Troubleshooting==<br />
<br />
===Waiting for connection===<br />
<br />
On some systems, CrashPlan may start before an internet connection is established. If using [[NetworkManager]], you can install {{AUR|networkmanager-dispatcher-crashplan-systemd}}{{Broken package link|{{aur-mirror|networkmanager-dispatcher-crashplan-systemd}}}}, which will automatically restart the CrashPlan service once a connection is successfully established.<br />
<br />
===Waiting for Backup===<br />
<br />
If the backup is stuck on "Waiting for Backup" even after you start it manually, it might be that CrashPlan cannot access its temporary directory, or that the directory is mounted {{ic|noexec}}. CrashPlan uses the default Java tmp dir, which is normally {{ic|/tmp}}. You can either remove the {{ic|noexec}} mount option (not recommended) or change the tmpdir CrashPlan is using.<br />
<br />
To change the tmpdir CrashPlan uses, open {{ic|/opt/crashplan/bin/run.conf}} and add {{ic|-Djava.io.tmpdir&#61;/new-tempdir}} to {{ic|SRV_JAVA_OPTS}}, for example:<br />
<br />
SRV_JAVA_OPTS="-Djava.io.tmpdir=/var/tmp/crashplan -Dfile.encoding=UTF-8 …<br />
<br />
Make sure to create the new tmpdir and verify that CrashPlan's user has access to it:<br />
# mkdir /var/tmp/crashplan<br />
<br />
Restart CrashPlan:<br />
 # systemctl restart crashplan.service<br />
<br />
===Desktop GUI Crashes on startup===<br />
<br />
On systems with GNOME 3 or libwebkit-gtk installed, the GUI may crash on launch. This can be fixed by following the instructions [https://support.code42.com/CrashPlan/Latest/Troubleshooting/CrashPlan_Client_Closes_In_Some_Linux_Installations here].<br />
<br />
===Out of Memory===<br />
<br />
For backup sets containing large numbers of files (more than 100,000 or so), the default maximum heap size of 512M may be too small. If this limit is hit, the CrashPlan engine will silently restart, and will usually get stuck restarting as it continually reaches the memory limit. The only sign of this happening is the creation of many small log files in {{ic|/opt/crashplan/bin}} for each service restart (potentially hundreds of thousands, depending on how long it takes to notice the problem). To increase the heap size limit, adjust the {{ic|-Xmx}} option in {{ic|/opt/crashplan/bin/run.conf}} to a reasonable value for your system.<br />
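<br />
For example, to raise the heap limit to 1024 MiB (the value is only an illustration; size it to your backup set), change the {{ic|-Xmx}} value in {{ic|SRV_JAVA_OPTS}}:<br />
<br />
 SRV_JAVA_OPTS="-Dfile.encoding=UTF-8 ... -Xmx1024m ..."<br />
<br />
Then restart the service with {{ic|systemctl restart crashplan.service}}.<br />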
<br />
===Real time protection===<br />
If you use real-time protection for your backup set and have a lot of files to back up, the default system configuration might not be able to allocate all of the handles required to watch every file in real time. This issue can manifest itself with messages like "inotify_add_watch: No space left on device" in the system journal.<br />
You can follow the instructions [http://support.code42.com/CrashPlan/Latest/Troubleshooting/Real-Time_Backup_For_Network-Attached_Drives here] and configure the inotify {{ic|max_user_watches}} parameter to a larger value to fix the issue, as in the sketch below.<br />
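<br />
A sketch of such a configuration (the value and the file name are examples, not values mandated by CrashPlan):<br />
<br />
 # sysctl fs.inotify.max_user_watches=1048576<br />
<br />
To make the setting persistent across reboots:<br />
<br />
{{hc|/etc/sysctl.d/99-crashplan.conf|<br />
fs.inotify.max_user_watches = 1048576<br />
}}<br />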
<br />
==See also==<br />
<br />
* [[Backup programs]]<br />
* [http://www.code42.com/crashplan/ CrashPlan home page]<br />
* [http://support.code42.com/CrashPlan/Latest/Configuring/Using_CrashPlan_On_A_Headless_Computer CrashPlan On A Headless Server - Code42Support]<br />
* [[Wikipedia:CrashPlan]]<br />
<br />
= ZFS =<br />
[[Category:File systems]]<br />
[[ja:ZFS]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management – zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"], ZFS is stable, fast, secure, and future-proof. Being licensed under the CDDL, and thus incompatible with the GPL, it is not possible for ZFS to be distributed along with the Linux kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and supercomputers.<br />
<br />
==Installation==<br />
=== General ===<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as a dependency, which in turn has {{AUR|spl-utils-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. A script to build ZFS and its dependencies automatically can be found [[#Automated build script|here]].<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It is not possible to apply kernel updates until updated packages are uploaded to the AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Root on ZFS ===<br />
<br />
When performing an Arch install on ZFS, {{AUR|zfs-git}} and its dependencies can be installed in the archiso environment as outlined in the previous section.<br />
<br />
It may be useful to prepare a [[#Embed the archzfs packages into an archiso|customized archiso]] with ZFS support builtin. For a much more detailed guide on installing Arch with ZFS as its root file system, see [[Installing Arch Linux on ZFS]].<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
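<br />
As a minimal sketch (the pool name {{ic|testpool}}, the paths, and the 2G size are arbitrary), a throwaway pool can be built on sparse image files and destroyed afterwards:<br />
<br />
 # truncate -s 2G /root/zfs0.img /root/zfs1.img /root/zfs2.img<br />
 # zpool create testpool raidz /root/zfs0.img /root/zfs1.img /root/zfs2.img<br />
 # zpool destroy testpool<br />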
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live up to its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit of this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon, execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic|parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all of the devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information. See [[Mdadm#Prepare_the_Devices]].}}<br />
<br />
{{Warning|For Advanced Format Disks with 4KB sector size, an ashift of 12 is recommended for best performance. Advanced Format disks emulate a sector size of 512 bytes for compatibility with legacy systems; this causes ZFS to sometimes use an ashift option number that is not ideal. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease the available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?], [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandleAdvacedFormatDrives 1.15 How does ZFS on Linux handle Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the ids of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of fewer than 10 devices. To find the ids:<br />
<br />
# ls -lh /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
{{Style|Missing references to [[Persistent block device naming]], it is useless to explain the differences (or even what they are) here.}}<br />
<br />
Disk labels and UUIDs can also be used for ZFS mounts by using [[GUID Partition Table|GPT]] partitions. ZFS drives have labels, but Linux is unable to read them at boot. Unlike [[Master Boot Record|MBR]] partitions, GPT partitions directly support both UUIDs and labels independent of the format inside the partition. Partitioning rather than using the whole disk for ZFS offers two additional advantages: the OS does not generate bogus partition numbers from whatever unpredictable data ZFS has written to the partition sector, and, if desired, you can easily over-provision SSD drives and slightly over-provision spindle drives to ensure that different models with slightly different sector counts can {{ic|zpool replace}} into your mirrors. This is a lot of organization and control over ZFS using readily available tools and techniques at almost zero cost.<br />
<br />
Use [[GUID Partition Table|gdisk]] to partition all or part of the drive as a single partition. gdisk does not automatically name partitions, so if partition labels are desired, use gdisk command "c" to label the partitions. Some reasons you might prefer labels over UUIDs are: labels are easy to control, labels can be titled to make the purpose of each disk in your arrangement readily apparent, and labels are shorter and easier to type. These are all advantages when the server is down and the heat is on. GPT partition labels have plenty of space and can store most international characters ([[wikipedia:GUID_Partition_Table#Partition_entries]]), allowing large data pools to be labeled in an organized fashion.<br />
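<br />
For example, with ''sgdisk'' (the scriptable companion to gdisk, from the same {{Pkg|gptfdisk}} package; the device, label, and type code below are placeholders):<br />
<br />
 # sgdisk --new=1:0:0 --change-name=1:zfsdata1 --typecode=1:BF01 /dev/sdd<br />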
<br />
Drives partitioned with GPT have labels and UUIDs that look like this:<br />
<br />
 # ls -l /dev/disk/by-partlabel<br />
 lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata1 -> ../../sdd1<br />
 lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata2 -> ../../sdc1<br />
 lrwxrwxrwx 1 root root 10 Apr 30 01:59 zfsl2arc -> ../../sda1<br />
<br />
 # ls -l /dev/disk/by-partuuid<br />
 lrwxrwxrwx 1 root root 10 Apr 30 01:44 148c462c-7819-431a-9aba-5bf42bb5a34e -> ../../sdd1<br />
 lrwxrwxrwx 1 root root 10 Apr 30 01:59 4f95da30-b2fb-412b-9090-fc349993df56 -> ../../sda1<br />
 lrwxrwxrwx 1 root root 10 Apr 30 01:44 e5ccef58-5adf-4094-81a7-3bac846a885f -> ../../sdc1<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool. Get them from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Advanced format disks ===<br />
<br />
In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because of backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (see the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandleAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Verifying pool creation ===<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
  raidz1-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
=== GRUB-compatible pool creation ===<br />
<br />
By default, ''zpool'' will enable all features on a pool. If {{ic|/boot}} resides on ZFS and you are using [[GRUB]], you must enable only those features supported by GRUB ({{ic|lz4_compress}} as of version 2.02.beta2), otherwise GRUB will not be able to read the pool.<br />
<br />
# zpool create -d \<br />
-o feature@async_destroy=enabled \<br />
-o feature@empty_bpobj=enabled \<br />
-o feature@lz4_compress=enabled \<br />
-o feature@spacemap_histogram=enabled \<br />
-o feature@enabled_txg=enabled \<br />
<pool_name> <vdevs><br />
<br />
{{Tip|As of September 2015, GRUB's development tree supports {{ic|extensible_dataset}}, {{ic|hole_birth}}, {{ic|embedded_data}}, and {{ic|large_blocks}}, making it viable to use a pool with all features enabled, either at create time or by using {{ic|zpool upgrade <pool_name>}}, if {{AUR|grub-git}} is installed.}}<br />
<br />
=== Importing a pool created by id ===<br />
<br />
Eventually a pool may fail to mount automatically and you will need to import it to bring it back. Take care to avoid the most obvious solution.<br />
<br />
 # zpool import zfsdata   # Do not do this! Always use -d, as below.<br />
<br />
This will import your pools using {{ic|/dev/sd?}} names, which will lead to problems the next time you rearrange your drives. This may be as simple as rebooting with a USB drive left in the machine, which harkens back to a time when PCs would not boot if a floppy disk was left in the machine. Adapt one of the following commands to import your pool so that pool imports retain the persistence they were created with:<br />
<br />
# zpool import -d /dev/disk/by-id zfsdata<br />
# zpool import -d /dev/disk/by-partlabel zfsdata<br />
# zpool import -d /dev/disk/by-partuuid zfsdata<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems; you can view a full list with {{ic|zfs get all <pool>}}. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool, and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help reduce fragmentation and speed up file access, at the cost that ZFS has to allocate a new 128KiB block each time only a few bytes are written.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases often sync their data files to the file system on their own transaction commits anyway. The end result is that ZFS commits data '''twice''' to the data disks, which can severely impact performance. You can tell ZFS to prefer not to use the ZIL, in which case data is only committed to the file system once. Setting this for non-database file systems, or for pools with configured log devices, can actually ''negatively'' impact performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you would like to use ZFS to store your /tmp directory, which may be useful for storing arbitrarily large sets of files or simply keeping your RAM free of idle data, you can generally improve the performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (e.g. with {{ic|fsync}} or {{ic|O_SYNC}}) and return immediately. While this has severe application-side data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected. Please note that this does ''not'' affect the integrity of ZFS itself, only the possibility that data an application expects on-disk may not have actually been written out following a crash.<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents some kinds of privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== ZVOLs ===<br />
<br />
ZFS volumes (ZVOLs) can suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default block size ({{ic|volblocksize}}) for ZVOLs is 8KiB already. If possible, it is best to align any partitions contained in a ZVOL to that block size (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the {{ic|volblocksize}} to accommodate the data inside the ZVOL as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
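<br />
A brief sketch (the pool, volume name, and sizes are placeholders): create a ZVOL with an explicit block size via {{ic|-b}} and format it with a file system block size that divides it evenly:<br />
<br />
 # zfs create -V 10G -b 8K <pool>/vol<br />
 # mkfs.ext4 -b 4096 /dev/zvol/<pool>/vol<br />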
<br />
==== RAIDZ and Advanced Format physical disks ====<br />
<br />
Each block of a ZVOL gets its own parity disks, and if you have physical media with logical block sizes of 4096B, 8192B, or so on, the parity needs to be stored in whole physical blocks, and this can drastically increase the space requirements of a ZVOL, requiring 2× or more physical storage capacity than the ZVOL's logical capacity. Setting the {{ic|volblocksize}} to 16k or 32k can help reduce this footprint drastically.<br />
<br />
See [https://github.com/zfsonlinux/zfs/issues/1807 ZFS on Linux issue #1807] for details.<br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid in the archiso differs from the one in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can be produced at boot time, abruptly abandoning the system in the busybox console and requiring an archiso for an emergency repair, either by exporting the pool or by adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool:<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow the use of swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the {{ic|getconf PAGESIZE}} command (the default on x86_64 is 4KiB). Another option useful for keeping the system running well in low-memory situations is not caching the ZVOL data.<br />
<br />
Create an 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as a swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
{{Out of date|The hibernate hook is deprecated. Does the limitation still apply?}}<br />
Keep in mind that the Hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ====<br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc.), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the {{ic|--keep}} parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set {{ic|1=com.sun:auto-snapshot=false}} on it. Likewise, set more fine-grained control by label; if, for example, no monthlies are to be kept for a dataset, set {{ic|1=com.sun:auto-snapshot:monthly=false}}, as shown below.<br />
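<br />
For example (the dataset name is a placeholder):<br />
<br />
 # zfs set com.sun:auto-snapshot=false <pool>/<dataset><br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/<dataset><br />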
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [[wikipedia:Backup_rotation_scheme#Grandfather-father-son|"Grandfather-father-son"]] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZPool creation fails ===<br />
If the following errors occur, they can be fixed:<br />
<br />
 the kernel failed to rescan the partition table: 16<br />
 cannot label 'sdc': try using parted(8) and then provide a specific slice: -1<br />
<br />
One reason this can occur is because [https://github.com/zfsonlinux/zfs/issues/2582 ZFS expects pool creation to take less than 1 second]. This is a reasonable assumption under ordinary conditions, but in many situations it may take longer. Each drive will need to be cleared again before another attempt can be made.<br />
<br />
 # parted /dev/sda rm 1<br />
 # dd if=/dev/zero of=/dev/sda bs=512 count=1<br />
 # zpool labelclear /dev/sda<br />
<br />
A brute-force creation can be attempted over and over again, and with some luck the ZPool creation will take less than 1 second.<br />
One cause of creation slowdown can be slow burst reads/writes on a drive. By reading from the disk in parallel with ZPool creation, it may be possible to increase burst speeds.<br />
<br />
# dd if=/dev/sda of=/dev/null<br />
<br />
This can be done for multiple drives by saving the above command for each drive to a file on separate lines and running:<br />
<br />
# cat $FILE | parallel<br />
<br />
Then run ZPool creation at the same time.<br />
<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of the available system memory on the host. Remember, ZFS is designed for enterprise-class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
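<br />
The same module parameter can also be changed at runtime (a sketch, assuming the zfs module is already loaded; an already-grown ARC shrinks to the new limit gradually):<br />
<br />
 # echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max<br />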
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem:<br />
<br />
 /dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use {{ic|-f}} with the zpool create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. One is to place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding {{ic|1=spl.spl_hostid=0x00bab10c}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}} and then regenerate the initramfs image, which will copy the hostid into the initramfs image:<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force-import the zpool:<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use:<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso, the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:<br />
<br />
# mkinitcpio -p linux<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double-check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again; if the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid; this one is just an example:<br />
 $ hostid<br />
 0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initramfs image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation of this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive.<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb {{!}} grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
Follow the [[Archiso]] steps for creating a fully functional Arch Linux live CD/DVD/USB image.<br />
<br />
Enable the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository:<br />
<br />
{{hc|~/archlive/pacman.conf|<nowiki><br />
...<br />
[demz-repo-core]<br />
Server = http://demizerone.com/$repo/$arch<br />
</nowiki>}}<br />
<br />
Add the {{ic|archzfs-git}} group to the list of packages to be installed:<br />
<br />
{{hc|~/archlive/packages.both|<br />
...<br />
archzfs-git<br />
}}<br />
<br />
Complete the [[Archiso#Build the ISO|Build the ISO]] steps to finally build the iso.<br />
<br />
=== Encryption in ZFS on Linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their names are fixed, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 \<br />
--key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
<br />
Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, which makes compression ineffective, and even identical input produces different output (thanks to salting), which makes deduplication impossible.<br />
To reduce the unnecessary overhead, it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
To get into the ZFS filesystem from the live system for maintenance, there are two options:<br />
<br />
# Build custom archiso with ZFS as described in [[#Embed the archzfs packages into an archiso]].<br />
# Boot the latest official archiso and bring up the network. Then enable the [[Unofficial_user_repositories#demz-repo-archiso|demz-repo-archiso]] repository inside the live system as usual, sync the pacman package database, and install the ''archzfs-git'' package.<br />
<br />
To start the recovery, load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
=== Automated build script ===<br />
<br />
{{Deletion|The wiki isn't the place to maintain massive script dumps}}<br />
<br />
The following script may be used to build ZFS and its dependencies automatically.<br />
<br />
The build order of the above packages is important due to nested dependencies. One can automate the entire process, including downloading the packages, with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
When ZFS is used as a data drive and boot support is not needed, these two shell scripts will build and remove all zfs packages. The only requirements are {{pkg|sudo}}, {{pkg|git}}, and answering a couple of prompts. On each kernel upgrade you remove ZFS with {{ic|zfsun.sh}}, update, and install ZFS with {{ic|zfsbuild.sh}}.<br />
{{hc|~/build/zfspkg/zfsbuild.sh|<nowiki><br />
#!/usr/bin/bash<br />
#<br />
# 2015-07-17 zfsbuild.sh by severach for AUR 4<br />
# 2015-08-08 AUR4 -> AUR, added git pull, safer AUR 3.5 update folder<br />
# Adapted from ZFS Builder by graysky<br />
# place this in a user home folder.<br />
# I recommend ~/build/zfspkg/. Do not name the folder 'zfs'.<br />
<br />
# 1 to add conflicts=(linux>,linux<) which offers automatic removal on upgrade.<br />
# Manual removal with zfsun.sh is preferred.<br />
_opt_AutoRemove=0<br />
_opt_ZFSPool='zfsdata'<br />
#_opt_ZFSbyid='/dev/disk/by-partlabel'<br />
_opt_ZFSbyid='/dev/disk/by-id'<br />
# '' for manual answer to prompts. --noconfirm to go ahead and do it all.<br />
_opt_AutoInstall='' # or '--noconfirm'<br />
<br />
# Multiprocessor compile enabled!<br />
# Huuuuuuge performance improvement. Watch in htop.<br />
# An E3-1245 can peg all 8 processors.<br />
#1 [|||||||||||||||||||||||||96.2%]<br />
#2 [|||||||||||||||||||||||||97.6%]<br />
#3 [|||||||||||||||||||||||||95.7%]<br />
#4 [|||||||||||||||||||||||||96.7%]<br />
#5 [|||||||||||||||||||||||||95.7%]<br />
#6 [|||||||||||||||||||||||||97.1%]<br />
#7 [|||||||||||||||||||||||||98.6%]<br />
#8 [|||||||||||||||||||||||||96.2%]<br />
#Mem[||| 596/31974MB]<br />
#Swp[ 0/0MB]<br />
<br />
set -u<br />
set -e<br />
<br />
if [ "${EUID}" -eq 0 ]; then<br />
echo "This script must NOT be run as root"<br />
sleep 1<br />
exit 1<br />
fi<br />
<br />
for i in 'sudo' 'git'; do<br />
command -v "${i}" >/dev/null 2>&1 || {<br />
echo "I require ${i} but it's not installed. Aborting." 1>&2<br />
exit 1; }<br />
done<br />
<br />
cd "$(dirname "$0")"<br />
OPWD="$(pwd)"<br />
for cwpackage in 'spl-utils-git' 'spl-git' 'zfs-utils-git' 'zfs-git'; do<br />
#cower -dc -f "${cwpackage}"<br />
if [ -d "${cwpackage}" -a ! -d "${cwpackage}/.git" ]; then<br />
echo "${cwpackage}: Convert AUR3.5 to AUR4"<br />
cd "${cwpackage}"<br />
git clone "https://aur.archlinux.org/${cwpackage}.git/" "${cwpackage}.temp"<br />
cd "${cwpackage}.temp"<br />
mv '.git' ..<br />
cd ..<br />
rm -rf "${cwpackage}.temp"<br />
cd ..<br />
fi<br />
if [ -d "${cwpackage}" ]; then<br />
echo "${cwpackage}: Update local copy"<br />
cd "${cwpackage}"<br />
git fetch<br />
git reset --hard 'origin/master'<br />
git pull # this line was missed in previous versions<br />
else<br />
echo "${cwpackage}: Clone to new folder"<br />
git clone "https://aur.archlinux.org/${cwpackage}.git/" <br />
cd "${cwpackage}"<br />
fi<br />
sed -i -e 's:^\s\+make$:'"& -s -j $(nproc):g" 'PKGBUILD'<br />
if [ "${_opt_AutoRemove}" -ne 0 ]; then<br />
sed -i -e 's:^conflicts=(.*$: &\n_kernelversionsmall="`uname -r | cut -d - -f 1`"\nconflicts+=("linux>${_kernelversionsmall}" "linux<${_kernelversionsmall}")\n:g' 'PKGBUILD'<br />
fi<br />
if ! makepkg -sCcfi ${_opt_AutoInstall}; then<br />
cd "${OPWD}"<br />
break<br />
fi<br />
#rm -rf 'zfs' 'spl'<br />
cd "${OPWD}"<br />
done<br />
# 'set -e' above would abort the script if a bare 'which' failed, so test inside the 'if' instead.<br />
if command -v fsck.zfs >/dev/null 2>&1; then<br />
  sudo mkinitcpio -p 'linux' # Stores fsck.zfs into the initrd image. I don't know why it would be needed.<br />
fi<br />
#sudo zpool import "${_opt_ZFSPool}" # Don't do this or zpool will mount via /dev/sd?, which you won't like!<br />
sudo zpool import -d "${_opt_ZFSbyid}" "${_opt_ZFSPool}"<br />
sudo zpool status<br />
sudo -k<br />
</nowiki>}}<br />
<br />
{{hc|~/build/zfspkg/zfsun.sh|<nowiki><br />
#!/usr/bin/bash<br />
<br />
# 2015-07-17 zfs uninstaller by severach for AUR4<br />
# Removing ZFS forgets to unmount the pools, which might be desirable if you're<br />
# running ZFS on the root file system.<br />
<br />
_opt_ZFSFolder='/home/zfsdata/foo'<br />
_opt_ZFSPool='zfsdata'<br />
<br />
if [ "${EUID}" -ne 0 ]; then<br />
echo 'Must be root, try sudo !!'<br />
sleep 1<br />
exit 1<br />
fi<br />
<br />
systemctl stop 'smbd.service' # Active shares can lock the mount. You might want to stop nfs too.<br />
zpool export "${_opt_ZFSPool}" # zpool import no longer works with drives that were zfs umount<br />
if [ ! -d "${_opt_ZFSFolder}" ]; then<br />
echo "${_opt_ZFSPool} exported"<br />
pacman -Rc 'spl-utils-git' # This works even if some are already removed.<br />
#pacman -R 'zfs-utils-git' 'spl-git' 'spl-utils-git' 'zfs-git'<br />
else<br />
echo "ZFS didn't unmount"<br />
fi<br />
systemctl start 'smbd.service'<br />
</nowiki>}}<br />
<br />
=== Bindmount ===<br />
<br />
It is not possible to bind mount a directory residing on ZFS onto another directory using fstab, because fstab is processed before the zfs pool is ready. To overcome this limitation, a systemd mount unit can be used for the bind mount, as in the following example, which binds /mnt/zfspool to /srv/nfs4/music. The unit configuration ensures that the zfs pool is ready before the bind mount is created. The name of the mount unit must match the path given after "Where", with slashes replaced by dashes; the {{ic|systemd-escape}} sketch after the example shows one way to derive the name. See [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdAndBindMounts SystemdAndBindMounts] and [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdBindMountUnits SystemdBindMountUnits] for more details.<br />
{{hc|srv-nfs4-music.mount|<nowiki><br />
[Mount]<br />
What=/mnt/zfspool<br />
Where=/srv/nfs4/music<br />
Type=none<br />
Options=bind<br />
<br />
[Unit]<br />
DefaultDependencies=no<br />
Conflicts=umount.target<br />
Before=local-fs.target umount.target<br />
After=zfs-mount.service<br />
Requires=zfs-mount.service<br />
ConditionPathIsDirectory=/mnt/zfspool<br />
<br />
[Install]<br />
WantedBy=local-fs.target<br />
</nowiki>}}<br />
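<br />
The unit name does not have to be worked out by hand: {{ic|systemd-escape}} performs the slash-to-dash conversion. A small sketch, using the mount point from the example above:<br />
$ systemd-escape --path --suffix=mount "/srv/nfs4/music"<br />
srv-nfs4-music.mount<br />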
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Timemasterhttps://wiki.archlinux.org/index.php?title=Talk:ZFS&diff=380914Talk:ZFS2015-07-04T16:58:17Z<p>Timemaster: Now that I added the warning to the main page, I am removing this question here.</p>
<hr />
<div></div>Timemasterhttps://wiki.archlinux.org/index.php?title=ZFS&diff=380913ZFS2015-07-04T16:53:54Z<p>Timemaster: /* Swap volume */</p>
<hr />
<div>[[Category:File systems]]<br />
[[ja:ZFS]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management – zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the CDDL, and thus incompatible with GPL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
=== General ===<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as dependencies; {{AUR|spl-git}} in turn has {{AUR|spl-utils-git}} as a dependency. SPL (Solaris Porting Layer) is a Linux kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. A script to build ZFS and its dependencies automatically can be found [[#Automated build script|here]].<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Root on ZFS ===<br />
<br />
When performing an Arch install on ZFS, {{AUR|zfs-git}} and its dependencies can be installed in the archiso environment as outlined in the previous section.<br />
<br />
It may be useful to prepare a [[#Embed the archzfs packages into an archiso|customized archiso]] with ZFS support builtin. For a much more detailed guide on installing Arch with ZFS as its root file system, see [[Installing Arch Linux on ZFS]].<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
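<br />
For instance, a throwaway pool can be built from a few file-backed VDEVs and destroyed again afterwards; a minimal sketch (file paths are arbitrary, and the dedicated article covers this in far more depth):<br />
# truncate -s 2G /root/zfs0.img /root/zfs1.img /root/zfs2.img<br />
# zpool create testpool raidz /root/zfs0.img /root/zfs1.img /root/zfs2.img<br />
# zpool destroy testpool<br />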
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live up to its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all of the devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information first. See [[Mdadm#Prepare the Devices]].}}<br />
<br />
{{Warning|For Advanced Format Disks with 4KB sector size, an ashift of 12 is recommended for best performance. Advanced Format disks emulate a sector size of 512 bytes for compatibility with legacy systems, this causes ZFS to sometimes use an ashift option number that is not ideal. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?], [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives 1.15 How does ZFS on Linux handles Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the IDs of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device IDs when creating ZFS storage pools of fewer than 10 devices. To find the IDs, simply:<br />
<br />
# ls -lh /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
{{Style|Missing references to [[Persistent block device naming]], it is useless to explain the differences (or even what they are) here.}}<br />
<br />
Disk labels and UUID can also be used for ZFS mounts by using [[GUID Partition Table|GPT]] partitions. ZFS drives have labels but Linux is unable to read them at boot. Unlike [[Master Boot Record|MBR]] partitions, GPT partitions directly support both UUID and labels independent of the format inside the partition. Partitioning rather than using the whole disk for ZFS offers two additional advantages. The OS does not generate bogus partition numbers from whatever unpredictable data ZFS has written to the partition sector, and if desired, you can easily over provision SSD drives, and slightly over provision spindle drives to ensure that different models with slightly different sector counts can zpool replace into your mirrors. This is a lot of organization and control over ZFS using readily available tools and techniques at almost zero cost.<br />
<br />
Use [[GUID Partition Table|gdisk]] to partition all or part of the drive as a single partition. gdisk does not automatically name partitions, so if partition labels are desired, use gdisk command "c" to label the partitions. Some reasons you might prefer labels over UUIDs are: labels are easy to control, labels can be titled to make the purpose of each disk in your arrangement readily apparent, and labels are shorter and easier to type. These are all advantages when the server is down and the heat is on. GPT partition labels have plenty of space and can store most international characters ([[wikipedia:GUID_Partition_Table#Partition_entries]]), allowing large data pools to be labeled in an organized fashion.<br />
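<br />
For scripted setups, ''sgdisk'' (the non-interactive companion to gdisk) can create and label a partition in one call; a sketch, assuming a hypothetical target disk {{ic|/dev/sdX}} and the label {{ic|zfsdata1}}:<br />
# sgdisk --new=1:0:0 --change-name=1:zfsdata1 /dev/sdX<br />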
<br />
Drives partitioned with GPT have labels and UUIDs that look like this.<br />
<br />
# ls -l /dev/disk/by-partlabel<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata1 -> ../../sdd1<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata2 -> ../../sdc1<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:59 zfsl2arc -> ../../sda1<br />
<br />
# ls -l /dev/disk/by-partuuid<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:44 148c462c-7819-431a-9aba-5bf42bb5a34e -> ../../sdd1<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:59 4f95da30-b2fb-412b-9090-fc349993df56 -> ../../sda1<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:44 e5ccef58-5adf-4094-81a7-3bac846a885f -> ../../sdc1<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool. Get them from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Advanced format disks ===<br />
<br />
In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because of the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (See the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Verifying pool creation ===<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
=== GRUB-compatible pool creation ===<br />
<br />
By default, ''zpool'' will enable all features on a pool. If {{ic|/boot}} resides on ZFS and [[GRUB]] is used, you must enable only those features supported by GRUB: the read-only compatible features plus {{ic|lz4_compress}} (as of version 2.02.beta2). Otherwise GRUB will not be able to read the pool.<br />
<br />
# zpool create -d \<br />
-o feature@async_destroy=enabled \<br />
-o feature@empty_bpobj=enabled \<br />
-o feature@lz4_compress=enabled \<br />
-o feature@spacemap_histogram=enabled \<br />
-o feature@enabled_txg=enabled \<br />
<pool_name> <vdevs><br />
<br />
{{Tip|As of June 2015, GRUB's development tree supports {{ic|extensible_dataset}}, {{ic|hole_birth}}, and {{ic|embedded_data}}, making it viable to use a pool with all features enabled, either at create time or by using {{ic|zpool upgrade <pool_name>}}, if {{AUR|grub-git}} is installed.}}<br />
<br />
=== Importing a pool created by id ===<br />
<br />
Eventually a pool may fail to auto mount and you will need to import it to bring it back. Take care to avoid the most obvious solution.<br />
<br />
# ###zpool import zfsdata # Do not do this! Always use -d<br />
<br />
This will import your pools using {{ic|/dev/sd?}} names, which will lead to problems the next time you rearrange your drives. This may be as simple as rebooting with a USB drive left in the machine, which harkens back to a time when PCs would not boot with a floppy disk left in the drive. Adapt one of the following commands to import your pool so that imports retain the persistence the pool was created with.<br />
<br />
# zpool import -d /dev/disk/by-id zfsdata<br />
# zpool import -d /dev/disk/by-partlabel zfsdata<br />
# zpool import -d /dev/disk/by-partuuid zfsdata<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems; you can view a full list with {{ic|zfs get all <pool>}}. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help with fragmentation and file access, at the cost that ZFS has to allocate a new 128KiB block each time only a few bytes are written.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases often sync their data files to the file system on their own transaction commits anyway. The end result is that ZFS commits data '''twice''' to the data disks, and it can severely impact performance. You can tell ZFS to prefer not to use the ZIL, in which case data is only committed to the file system once. Setting this for non-database file systems, or for pools with configured log devices, can actually ''negatively'' impact performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you would like to use ZFS to store your /tmp directory, which may be useful for storing arbitrarily-large sets of files or simply keeping your RAM free of idle data, you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (e.g. with {{ic|fsync}} or {{ic|O_SYNC}}) and return immediately. While this has severe application-side data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected. Please note this does ''not'' affect the integrity of ZFS itself, only the possibility that data an application expects on-disk may not have actually been written out following a crash.<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents some kinds of privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== zvols ===<br />
<br />
zvols can suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default block size for zvols ({{ic|volblocksize}}) is already 8KiB. If possible, it is best to align any partitions contained in a zvol to your block size (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the block size to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
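<br />
As an illustration, the block size can be set when the zvol is created with the {{ic|-b}} flag, and a contained file system created to match; names here are hypothetical:<br />
# zfs create -V 32G -b 8K <pool>/vm-disk<br />
# mkfs.ext4 -b 4096 /dev/zvol/<pool>/vm-disk<br />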
<br />
==== RAIDZ and Advanced Format physical disks ====<br />
<br />
Each block of a zvol gets its own parity, and if you have physical media with logical block sizes of 4096B, 8192B, or so on, the parity needs to be stored in whole physical blocks. This can drastically increase the space requirements of a zvol, requiring 2× or more physical storage capacity than the zvol's logical capacity. Setting the block size to 16k or 32k can help reduce this footprint drastically.<br />
<br />
See [https://github.com/zfsonlinux/zfs/issues/1807 ZFS on Linux issue #1807 for details]<br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
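<br />
If cron is not in use, a [[systemd]] timer can trigger the scrub instead. A minimal sketch with hypothetical unit names, assuming the pool is called {{ic|bigdata}}; enable it with {{ic|systemctl enable zfs-scrub.timer}}:<br />
<br />
{{hc|/etc/systemd/system/zfs-scrub.service|<nowiki><br />
[Unit]<br />
Description=Scrub the bigdata zpool<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/usr/bin/zpool scrub bigdata<br />
</nowiki>}}<br />
<br />
{{hc|/etc/systemd/system/zfs-scrub.timer|<nowiki><br />
[Unit]<br />
Description=Weekly scrub of the bigdata zpool<br />
<br />
[Timer]<br />
OnCalendar=weekly<br />
Persistent=true<br />
<br />
[Install]<br />
WantedBy=timers.target<br />
</nowiki>}}<br />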
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid is different in the archiso than in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can be produced at boot time, abruptly dropping the system into the busybox console and requiring an archiso for an emergency repair, either by exporting the pool or by adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]]<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in 2 steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow the use of swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the {{ic|getconf PAGESIZE}} command (the default on x86_64 is 4KiB). Another option useful for keeping the system running well in low-memory situations is not caching the zvol data.<br />
<br />
{{warning|While ZVOL create a block device and allow you to use it as swap, swap on zvols can deadlock and should be avoided. See [https://clusterhq.com/2014/09/11/zfs-on-linux-runtime-stability/ ZFS on Linux Runtime Stability] and [https://github.com/zfsonlinux/zfs/issues/1274 ZFS Swap Lockup]}}<br />
<br />
Create an 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ====<br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set {{ic|1=com.sun:auto-snapshot=false}} on it. Likewise, more fine-grained control is available per label: if, for example, no monthlies are to be kept on a dataset, set {{ic|1=com.sun:auto-snapshot:monthly=false}} on it.<br />
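<br />
For example, to exclude a scratch dataset entirely while only suppressing monthlies elsewhere (dataset names hypothetical):<br />
# zfs set com.sun:auto-snapshot=false <pool>/scratch<br />
# zfs set com.sun:auto-snapshot:monthly=false <pool>/data<br />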
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [[wikipedia:Backup_rotation_scheme#Grandfather-father-son|"Grandfather-father-son"]] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
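<br />
Under the hood this style of replication boils down to piping {{ic|zfs send}} into {{ic|zfs receive}}; a manual equivalent might look like the following, with host and dataset names purely illustrative:<br />
# zfs snapshot <pool>/data@backup1<br />
# zfs send <pool>/data@backup1 | ssh backuphost zfs receive backuppool/data<br />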
<br />
==Troubleshooting==<br />
=== ZPool creation fails ===<br />
ZPool creation can fail with the following error:<br />
<br />
# the kernel failed to rescan the partition table: 16<br />
# cannot label 'sdc': try using parted(8) and then provide a specific slice: -1<br />
<br />
One reason this can occur is because [https://github.com/zfsonlinux/zfs/issues/2582 ZFS expects pool creation to take less than 1 second]. This is a reasonable assumption under ordinary conditions, but in many situations it may take longer. Each drive will need to be cleared again before another attempt can be made.<br />
<br />
# parted /dev/sda rm 1<br />
# parted /dev/sda rm 2<br />
# dd if=/dev/zero of=/dev/sdb bs=512 count=1<br />
# zpool labelclear /dev/sda<br />
<br />
A brute force creation can be attempted over and over again, and with some luck the ZPool creation will take less than 1 second.<br />
One cause of creation slowdown can be slow burst reads/writes on a drive. By reading from the disk in parallel with ZPool creation, it may be possible to increase burst speeds.<br />
<br />
# dd if=/dev/sda of=/dev/null<br />
<br />
This can be done with multiple drives by saving the above command for each drive to a file on separate lines and running <br />
<br />
# cat $FILE | parallel<br />
<br />
Then run ZPool creation at the same time.<br />
<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
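<br />
The limit can also be applied without a reboot through the module parameter; a sketch (the value is in bytes):<br />
# echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max<br />
Alternatively, the same setting can be made persistent via a modprobe drop-in, assuming a hypothetical file name:<br />
{{hc|/etc/modprobe.d/zfs.conf|<nowiki><br />
options zfs zfs_arc_max=536870912<br />
</nowiki>}}<br />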
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zpool:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use {{ic|-f}} with the ''zpool create'' command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding {{ic|1=spl.spl_hostid=0x00bab10c}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}}, and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use,<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso the network configuration could be different generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:<br />
<br />
# mkinitcpio -p linux<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership. So during the first boot the zpool should mount correctly. If it does not there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. This one is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initram image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted it should be replaced A.S.A.P. with an identical drive.<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb | grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
Follow the [[Archiso]] steps for creating a fully functional Arch Linux live CD/DVD/USB image.<br />
<br />
Enable the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository and add the {{ic|archzfs-git}} group to the list of packages to be installed:<br />
$ echo archzfs-git >> ~/archlive/packages.both<br />
<br />
Complete [[Archiso#Build the ISO|Build the ISO]] to finally build the iso.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on linux does not support encryption directly, but zpools can be created in dm-crypt block devices. Since the zpool is created on the plain-text<br />
abstraction it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their names are fixed, so you just need to change the {{ic|zpool create}} commands to<br />
point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there.<br />
Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection<br />
might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 \<br />
--key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed no import errors will occur.<br />
<br />
Creating encrypted zpools works fine, but if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even the same input produces different output (thanks to salting), making deduplication impossible.<br />
To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example to have an encrypted home: (the two passwords, encryption and login, must be the same)<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
To get into the ZFS filesystem from live system for maintenance, there are two options:<br />
<br />
# Build custom archiso with ZFS as described in [[#Embed the archzfs packages into an archiso]].<br />
# Boot the latest official archiso and bring up the network. Then enable [[Unofficial_user_repositories#demz-repo-archiso|demz-repo-archiso]] repository inside the live system as usual, sync the pacman package database and install the ''archzfs-git'' package.<br />
<br />
To start the recovery, load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
=== Automated build script ===<br />
<br />
The following script may be used to build ZFS and its dependencies automatically.<br />
<br />
The build order of these packages is important due to nested dependencies. One can automate the entire process, including downloading the packages, with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
=== Bindmount ===<br />
<br />
It is not possible to bind mount a directory residing on ZFS onto another directory using fstab, because fstab is processed before the zfs pool is ready. To overcome this limitation, a systemd mount unit can be used for the bind mount, as in the following example, which binds /mnt/zfspool to /srv/nfs4/music. The unit configuration ensures that the zfs pool is ready before the bind mount is created. The name of the mount unit must match the path given after "Where", with slashes replaced by dashes. See [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdAndBindMounts SystemdAndBindMounts] and [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdBindMountUnits SystemdBindMountUnits] for more details.<br />
{{hc|srv-nfs4-music.mount|<nowiki><br />
[Mount]<br />
What=/mnt/zfspool<br />
Where=/srv/nfs4/music<br />
Type=none<br />
Options=bind<br />
<br />
[Unit]<br />
DefaultDependencies=no<br />
Conflicts=umount.target<br />
Before=local-fs.target umount.target<br />
After=zfs-mount.service<br />
Requires=zfs-mount.service<br />
ConditionPathIsDirectory=/mnt/zfspool<br />
<br />
[Install]<br />
WantedBy=local-fs.target<br />
</nowiki>}}<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Timemasterhttps://wiki.archlinux.org/index.php?title=Install_Arch_Linux_on_ZFS&diff=380804Install Arch Linux on ZFS2015-07-03T18:24:16Z<p>Timemaster: /* For BIOS motherboards */</p>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
[[ja:ZFS に Arch Linux をインストール]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
This article details the steps required to install Arch Linux onto a root ZFS filesystem. This article supplements the [[Beginners' guide]].<br />
<br />
== Installation ==<br />
<br />
See [[ZFS#Installation]] for installing the ZFS packages. If installing Arch Linux onto ZFS from the archiso, it would be easier to use the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
=== Embedding archzfs into archiso ===<br />
<br />
See [[ZFS#Embed_the_archzfs_packages_into_an_archiso|ZFS]] article.<br />
<br />
== Partition the destination drive ==<br />
<br />
Review [[Beginners' guide#Prepare the storage drive]] for information on determining the partition table type to use for ZFS. ZFS supports GPT and MBR partition tables.<br />
<br />
ZFS manages its own partitions, so only a basic partition table scheme is required. The partition that will contain the ZFS filesystem should be of the type {{ic|bf00}}, or "Solaris Root".<br />
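<br />
For example, an existing partition (here the hypothetical second partition of {{ic|/dev/sdX}}) can be tagged with that type code non-interactively using ''sgdisk'':<br />
# sgdisk --typecode=2:bf00 /dev/sdX<br />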
<br />
=== Partition scheme ===<br />
<br />
Here is an example, using MBR, of a basic partition scheme that could be employed for your ZFS root setup:<br />
<br />
{{bc|<nowiki><br />
Part Size Type<br />
---- ---- -------------------------<br />
1 512M Ext boot partition (8300)<br />
2 XXXG Solaris Root (bf00)</nowiki><br />
}}<br />
<br />
Here is an example using GPT. The BIOS boot partition contains the bootloader.<br />
<br />
{{bc|<nowiki><br />
Part Size Type<br />
---- ---- -------------------------<br />
1 2M BIOS boot partition (ef02)<br />
2 512M Ext boot partition (8300)<br />
3 XXXG Solaris Root (bf00)</nowiki><br />
}}<br />
<br />
An additional partition may be required depending on your hardware and chosen bootloader. Consult [[Beginners' guide#Install and configure a bootloader]] for more info.<br />
<br />
{{Tip|Bootloaders with support for ZFS are described in [[#Install and configure the bootloader]].}}<br />
{{Warning|Several GRUB bugs ([https://savannah.gnu.org/bugs/?42861 bug #42861], [https://github.com/zfsonlinux/grub/issues/5 zfsonlinux/grub/issues/5]) prevent or complicate installing it on ZFS partitions, use of a separate boot partition is recommended}}<br />
<br />
== Format the destination disk ==<br />
<br />
Format the boot partition as well as any other system partitions. Do not do anything to the Solaris partition nor to the BIOS boot partition. ZFS will manage the first, and your bootloader the second.<br />
<br />
== Setup the ZFS filesystem ==<br />
<br />
First, make sure the ZFS modules are loaded,<br />
<br />
# modprobe zfs<br />
<br />
=== Create the root zpool ===<br />
<br />
# zpool create zroot /dev/disk/by-id/''id-to-partition''<br />
<br />
{{Warning|<br />
* Always use id names when working with ZFS, otherwise import errors will occur.<br />
* The zpool command will normally activate all features. See [[ZFS#GRUB-compatible pool creation]] when using [[GRUB]].}}<br />
<br />
=== Create necessary filesystems ===<br />
<br />
If so desired, sub-filesystem mount points such as {{ic|/home}} and {{ic|/root}} can be created with the following commands:<br />
<br />
# zfs create zroot/home -o mountpoint=/home<br />
# zfs create zroot/root -o mountpoint=/root<br />
<br />
Note that if you want to use other datasets for system directories ({{ic|/var}} or {{ic|/etc}} included) your system will not boot unless they are listed in {{ic|/etc/fstab}}! We will address that at the appropriate time in this tutorial.<br />
<br />
=== Swap partition ===<br />
<br />
See [[ZFS#Swap volume]].<br />
<br />
=== Configure the root filesystem ===<br />
<br />
First, set the mount point of the root filesystem:<br />
<br />
# zfs set mountpoint=/ zroot<br />
<br />
and optionally, any sub-filesystems:<br />
<br />
# zfs set mountpoint=/home zroot/home<br />
# zfs set mountpoint=/root zroot/root<br />
<br />
and if you have separate datasets for system directories (e.g. {{ic|/var}} or {{ic|/usr}})<br />
<br />
# zfs set mountpoint=legacy zroot/usr<br />
# zfs set mountpoint=legacy zroot/var<br />
<br />
and put them in {{ic|/etc/fstab}}<br />
{{hc|/etc/fstab|<br />
# <file system> <dir> <type> <options> <dump> <pass><br />
zroot/usr /usr zfs defaults,noatime 0 0<br />
zroot/var /var zfs defaults,noatime,acl 0 0}}<br />
<br />
Note that the {{ic|/var}} filesystem requires [[Systemd#systemd-tmpfiles-setup.service_fails_to_start_at_boot|Access Control Lists enabled (acl)]], which is disabled by default in zfs. To enable it use {{ic|zfs set}}:<br />
<br />
# zfs set xattr=sa zroot/var<br />
# zfs set acltype=posixacl zroot/var<br />
<br />
The property {{ic|xattr&#61;sa}} is not mandatory, but suggested. Check {{ic|man zfs}} for all details.<br />
<br />
Set the bootfs property on the descendant root filesystem so the boot loader knows where to find the operating system.<br />
<br />
# zpool set bootfs=zroot zroot<br />
<br />
Export the pool,<br />
<br />
# zpool export zroot<br />
<br />
{{Warning|Do not skip this, otherwise you will be required to use {{ic|-f}} when importing your pools. This unloads the imported pool.}}<br />
{{Note|This might fail if you added a swap partition above. You need to turn it off with the ''swapoff'' command first.}}<br />
<br />
Finally, re-import the pool,<br />
<br />
# zpool import -d /dev/disk/by-id -R /mnt zroot<br />
<br />
{{Note|{{ic|-d}} is not the actual device id, but the {{ic|/dev/by-id}} directory containing the symbolic links.}}<br />
<br />
If there is an error in this step, you can export the pool to redo the command. The ZFS filesystem is now ready to use.<br />
<br />
Be sure to bring the zpool.cache file into your new system. This is required later for the ZFS daemon to start.<br />
<br />
# cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache<br />
<br />
If you do not have /etc/zfs/zpool.cache, create it:<br />
<br />
# zpool set cachefile=/etc/zfs/zpool.cache zroot<br />
<br />
== Install and configure Arch Linux ==<br />
<br />
Follow the steps below using the [[Beginners' guide]]. It will be noted where special consideration must be taken for ZFSonLinux.<br />
<br />
* First mount any boot or system partitions using the mount command.<br />
<br />
* Install the base system.<br />
<br />
* The procedure described in [[Beginners' guide#Generate an fstab]] is usually overkill for ZFS. ZFS usually auto mounts its own partitions, so we do not need ZFS partitions in {{ic|fstab}} file, unless the user made datasets of system directories. To generate the {{ic|fstab}} for filesystems, use:<br />
# genfstab -U -p /mnt | grep boot >> /mnt/etc/fstab<br />
<br />
* Edit the {{ic|/etc/fstab}}:<br />
<br />
{{Note|<br />
* If you chose to create datasets for system directories, keep them in this {{ic|fstab}}! Comment out the lines for the {{ic|/}}, {{ic|/root}}, and {{ic|/home}} mountpoints rather than deleting them; you may need those UUIDs later if something goes wrong.<br />
* Anyone who stuck with the guide's directions can delete everything except for the swap file and the boot/EFI partition. It is conventional to replace the swap's UUID with {{ic|/dev/zvol/zroot/swap}}, as shown in the example after this note.<br />
}}<br />
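<br />
A minimal sketch of such a swap entry (the zvol path follows the convention mentioned in the note; the mount options are assumptions):<br />
<br />
{{hc|/etc/fstab|<br />
/dev/zvol/zroot/swap none swap defaults 0 0}}<br />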
<br />
* When creating the initial ramdisk, first edit {{ic|/etc/mkinitcpio.conf}} and add {{ic|zfs}} before {{ic|filesystems}}. Also, move the {{ic|keyboard}} hook before {{ic|zfs}} so you can type in the console if something goes wrong. You may also remove {{ic|fsck}} (if you are not using Ext3 or Ext4). Your {{ic|HOOKS}} line should look something like this:<br />
HOOKS="base udev autodetect modconf block keyboard zfs filesystems"<br />
<br />
* Regenerate the initramfs with the command:<br />
# mkinitcpio -p linux<br />
<br />
== Install and configure the bootloader ==<br />
<br />
=== For BIOS motherboards ===<br />
<br />
Follow [[GRUB#BIOS systems 2]] to install GRUB onto your disk. {{ic|grub-mkconfig}} does not properly detect the ZFS filesystem, so it is necessary to edit {{ic|grub.cfg}} manually:<br />
<br />
{{hc|/boot/grub/grub.cfg|<nowiki><br />
set timeout=2<br />
set default=0<br />
<br />
# (0) Arch Linux<br />
menuentry "Arch Linux" {<br />
search --no-floppy --label --set=root zroot<br />
linux /vmlinuz-linux zfs=zroot rw<br />
initrd /initramfs-linux.img<br />
}<br />
</nowiki>}}<br />
<br />
If you did not create a separate {{ic|/boot}} partition, the kernel and initrd paths have to be in the following format:<br />
<br />
/dataset/@/actual/path <br />
<br />
Example with Arch installed on the main dataset:<br />
<br />
linux /@/boot/vmlinuz-linux zfs=zroot rw<br />
initrd /@/boot/initramfs-linux.img<br />
<br />
Example with Arch installed on a separate dataset, zroot/OS/root:<br />
<br />
linux /OS/root/@/boot/vmlinuz-linux zfs=zroot/OS/root rw <br />
initrd /OS/root/@/boot/initramfs-linux.img<br />
<br />
=== For UEFI motherboards ===<br />
<br />
Use {{ic|EFISTUB}} and {{ic|rEFInd}} for the UEFI boot loader. See [[Beginners' guide#For UEFI motherboards]]. The kernel parameters in {{ic|refind_linux.conf}} for ZFS should include {{ic|1=zfs=bootfs}} or {{ic|1=zfs=zroot}} so the system can boot from ZFS. The {{ic|root}} and {{ic|rootfstype}} parameters are not needed.<br />
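<br />
For illustration, a {{ic|refind_linux.conf}} entry might look like the following (the menu title is arbitrary, and the options assume the pool name used in this article):<br />
<br />
{{hc|/boot/refind_linux.conf|<br />
"Boot Arch Linux (ZFS)" "zfs=zroot rw"}}<br />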
<br />
== Unmount and restart ==<br />
<br />
We are almost done!<br />
# exit<br />
# umount /mnt/boot<br />
# zfs umount -a<br />
# zpool export zroot<br />
Now reboot.<br />
<br />
{{Warning|If you do not properly export the zpool, the pool will refuse to import in the ramdisk environment and you will be stuck at the busybox terminal.}}<br />
<br />
== After the first boot ==<br />
<br />
If everything went fine up to this point, your system will boot. Once.<br />
For your system to be able to reboot without issues, you need to enable the {{ic|zfs.target}} to auto mount the pools and set the hostid.<br />
<br />
For each pool you want automatically mounted execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
Enable the target with [[systemd]]:<br />
# systemctl enable zfs.target<br />
<br />
When running ZFS on root, the machine's hostid will not be available at the time of mounting the root filesystem. There are two solutions to this. You can either place your spl hostid in the [[kernel parameters]] in your boot loader, for example by adding {{ic|<nowiki>spl.spl_hostid=0x00bab10c</nowiki>}}; to get your number, use the {{ic|hostid}} command.<br />
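<br />
For example (the hostid value below is hypothetical; substitute the output from your own machine):<br />
<br />
$ hostid<br />
007f0101<br />
<br />
which would translate into the kernel parameter {{ic|<nowiki>spl.spl_hostid=0x007f0101</nowiki>}}.<br />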
<br />
The other, and suggested, solution is to make sure that there is a hostid in {{ic|/etc/hostid}} and then regenerate the initramfs image, which will copy the hostid into it. To write the hostid file safely, use a small C program:<br />
<br />
#include <stdio.h><br />
#include <errno.h><br />
#include <unistd.h><br />
<br />
/* Persist the current hostid to /etc/hostid via sethostid(). */<br />
int main() {<br />
    int res;<br />
    res = sethostid(gethostid());<br />
    if (res != 0) {<br />
        switch (errno) {<br />
        case EACCES:<br />
            fprintf(stderr, "Error! No permission to write the"<br />
                " file used to store the host ID.\n"<br />
                "Are you root?\n");<br />
            break;<br />
        case EPERM:<br />
            fprintf(stderr, "Error! The calling process's effective"<br />
                " user or group ID is not the same as"<br />
                " its corresponding real ID.\n");<br />
            break;<br />
        default:<br />
            fprintf(stderr, "Unknown error.\n");<br />
        }<br />
        return 1;<br />
    }<br />
    return 0;<br />
}<br />
<br />
Copy it, save it as {{ic|writehostid.c}} and compile it with {{ic|gcc -o writehostid writehostid.c}}; finally, execute it and regenerate the initramfs image:<br />
<br />
# ./writehostid<br />
# mkinitcpio -p linux<br />
<br />
You can now delete the two files {{ic|writehostid.c}} and {{ic|writehostid}}. Your system should work and reboot properly now.<br />
<br />
== See also ==<br />
<br />
* [https://github.com/dajhorn/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem HOWTO install Ubuntu to a Native ZFS Root]<br />
* [http://lildude.co.uk/zfs-cheatsheet ZFS cheatsheet]<br />
* [http://www.funtoo.org/wiki/ZFS_Install_Guide Funtoo ZFS install guide]</div>Timemasterhttps://wiki.archlinux.org/index.php?title=Install_Arch_Linux_on_ZFS&diff=380803Install Arch Linux on ZFS2015-07-03T18:23:20Z<p>Timemaster: Improved grub script :</p>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
[[ja:ZFS に Arch Linux をインストール]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
This article details the steps required to install Arch Linux onto a root ZFS filesystem. This article supplements the [[Beginners' guide]].<br />
<br />
== Installation ==<br />
<br />
See [[ZFS#Installation]] for installing the ZFS packages. If installing Arch Linux onto ZFS from the archiso, it would be easier to use the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
=== Embedding archzfs into archiso ===<br />
<br />
See [[ZFS#Embed_the_archzfs_packages_into_an_archiso|ZFS]] article.<br />
<br />
== Partition the destination drive ==<br />
<br />
Review [[Beginners' guide#Prepare the storage drive]] for information on determining the partition table type to use for ZFS. ZFS supports GPT and MBR partition tables.<br />
<br />
ZFS manages its own partitions, so only a basic partition table scheme is required. The partition that will contain the ZFS filesystem should be of the type {{ic|bf00}}, or "Solaris Root".<br />
<br />
=== Partition scheme ===<br />
<br />
Here is an example, using MBR, of a basic partition scheme that could be employed for your ZFS root setup:<br />
<br />
{{bc|<nowiki><br />
Part Size Type<br />
---- ---- -------------------------<br />
1 512M Ext boot partition (8300)<br />
2 XXXG Solaris Root (bf00)</nowiki><br />
}}<br />
<br />
Here is an example using GPT. The BIOS boot partition contains the bootloader.<br />
<br />
{{bc|<nowiki><br />
Part Size Type<br />
---- ---- -------------------------<br />
1 2M BIOS boot partition (ef02)<br />
2 512M Ext boot partition (8300)<br />
3 XXXG Solaris Root (bf00)</nowiki><br />
}}<br />
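<br />
As an illustration (not part of the original guide), this GPT layout could be created non-interactively with {{ic|sgdisk}}, assuming the target disk is {{ic|/dev/sda}}:<br />
<br />
# sgdisk -n 1:0:+2M -t 1:ef02 /dev/sda<br />
# sgdisk -n 2:0:+512M -t 2:8300 /dev/sda<br />
# sgdisk -n 3:0:0 -t 3:bf00 /dev/sda<br />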
<br />
An additional partition may be required depending on your hardware and chosen bootloader. Consult [[Beginners' guide#Install and configure a bootloader]] for more info.<br />
<br />
{{Tip|Bootloaders with support for ZFS are described in [[#Install and configure the bootloader]].}}<br />
{{Warning|Several GRUB bugs ([https://savannah.gnu.org/bugs/?42861 bug #42861], [https://github.com/zfsonlinux/grub/issues/5 zfsonlinux/grub/issues/5]) prevent or complicate installing it on ZFS partitions, use of a separate boot partition is recommended}}<br />
<br />
== Format the destination disk ==<br />
<br />
Format the boot partition as well as any other system partitions. Do not do anything to the Solaris partition nor to the BIOS boot partition. ZFS will manage the first, and your bootloader the second.<br />
<br />
== Setup the ZFS filesystem ==<br />
<br />
First, make sure the ZFS modules are loaded,<br />
<br />
# modprobe zfs<br />
<br />
=== Create the root zpool ===<br />
<br />
# zpool create zroot /dev/disk/by-id/''id-to-partition''<br />
<br />
{{Warning|<br />
* Always use id names when working with ZFS, otherwise import errors will occur.<br />
* The zpool command will normally activate all features. See [[ZFS#GRUB-compatible pool creation]] when using [[GRUB]].}}<br />
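<br />
To find the id-based name of a partition, you can list the symbolic links in that directory (the device names below are hypothetical examples):<br />
<br />
$ ls /dev/disk/by-id/<br />
ata-ST31000528AS_XXXXXXXX  ata-ST31000528AS_XXXXXXXX-part1  ata-ST31000528AS_XXXXXXXX-part2<br />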
<br />
=== Create necessary filesystems ===<br />
<br />
If so desired, sub-filesystem mount points such as {{ic|/home}} and {{ic|/root}} can be created with the following commands:<br />
<br />
# zfs create zroot/home -o mountpoint=/home<br />
# zfs create zroot/root -o mountpoint=/root<br />
<br />
Note that if you want to use other datasets for system directories (including {{ic|/var}} or {{ic|/etc}}), your system will not boot unless they are listed in {{ic|/etc/fstab}}! This is addressed at the appropriate point in this tutorial.<br />
<br />
=== Swap partition ===<br />
<br />
See [[ZFS#Swap volume]].<br />
<br />
=== Configure the root filesystem ===<br />
<br />
First, set the mount point of the root filesystem:<br />
<br />
# zfs set mountpoint=/ zroot<br />
<br />
and optionally, any sub-filesystems:<br />
<br />
# zfs set mountpoint=/home zroot/home<br />
# zfs set mountpoint=/root zroot/root<br />
<br />
and if you have separate datasets for system directories (e.g. {{ic|/var}} or {{ic|/usr}}), set them to legacy mounting:<br />
<br />
# zfs set mountpoint=legacy zroot/usr<br />
# zfs set mountpoint=legacy zroot/var<br />
<br />
and list them in {{ic|/etc/fstab}}:<br />
{{hc|/etc/fstab|<br />
# <file system> <dir> <type> <options> <dump> <pass><br />
zroot/usr /usr zfs defaults,noatime 0 0<br />
zroot/var /var zfs defaults,noatime,acl 0 0}}<br />
<br />
Note that the {{ic|/var}} filesystem requires [[Systemd#systemd-tmpfiles-setup.service_fails_to_start_at_boot|Access Control Lists (acl)]], which are disabled by default in ZFS. To enable them, use {{ic|zfs set}}:<br />
<br />
# zfs set xattr=sa zroot/var<br />
# zfs set acltype=posixacl zroot/var<br />
<br />
The {{ic|xattr&#61;sa}} property is not mandatory, but recommended. See {{ic|man zfs}} for all the details.<br />
<br />
Set the bootfs property on the descendant root filesystem so the boot loader knows where to find the operating system.<br />
<br />
# zpool set bootfs=zroot zroot<br />
<br />
Export the pool,<br />
<br />
# zpool export zroot<br />
<br />
{{Warning|Do not skip this, otherwise you will be required to use {{ic|-f}} when importing your pools. This unloads the imported pool.}}<br />
{{Note|This might fail if you added a swap partition above. Turn it off first with the ''swapoff'' command.}}<br />
<br />
Finally, re-import the pool,<br />
<br />
# zpool import -d /dev/disk/by-id -R /mnt zroot<br />
<br />
{{Note|{{ic|-d}} is not the actual device id, but the {{ic|/dev/disk/by-id}} directory containing the symbolic links.}}<br />
<br />
If there is an error in this step, you can export the pool to redo the command. The ZFS filesystem is now ready to use.<br />
<br />
Be sure to bring the zpool.cache file into your new system. This is required later for the ZFS daemon to start.<br />
<br />
# cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache<br />
<br />
If you do not have {{ic|/etc/zfs/zpool.cache}}, create it:<br />
<br />
# zpool set cachefile=/etc/zfs/zpool.cache zroot<br />
<br />
== Install and configure Arch Linux ==<br />
<br />
Follow the steps in the [[Beginners' guide]]; it is noted below where special consideration must be taken for ZFSonLinux.<br />
<br />
* First mount any boot or system partitions using the mount command.<br />
<br />
* Install the base system.<br />
<br />
* The procedure described in [[Beginners' guide#Generate an fstab]] is usually overkill for ZFS. ZFS usually auto-mounts its own partitions, so ZFS partitions are not needed in the {{ic|fstab}} file unless the user created datasets for system directories. To generate the {{ic|fstab}} for the remaining filesystems, use:<br />
# genfstab -U -p /mnt | grep boot >> /mnt/etc/fstab<br />
<br />
* Edit the {{ic|/etc/fstab}}:<br />
<br />
{{Note|<br />
* If you chose to create datasets for system directories, keep them in this {{ic|fstab}}! Comment out the lines for the {{ic|/}}, {{ic|/root}}, and {{ic|/home}} mountpoints rather than deleting them; you may need those UUIDs later if something goes wrong.<br />
* Anyone who stuck with the guide's directions can delete everything except for the swap file and the boot/EFI partition. It is conventional to replace the swap's UUID with {{ic|/dev/zvol/zroot/swap}}.<br />
}}<br />
<br />
* When creating the initial ramdisk, first edit {{ic|/etc/mkinitcpio.conf}} and add {{ic|zfs}} before {{ic|filesystems}}. Also, move the {{ic|keyboard}} hook before {{ic|zfs}} so you can type in the console if something goes wrong. You may also remove {{ic|fsck}} (if you are not using Ext3 or Ext4). Your {{ic|HOOKS}} line should look something like this:<br />
HOOKS="base udev autodetect modconf block keyboard zfs filesystems"<br />
<br />
* Regenerate the initramfs with the command:<br />
# mkinitcpio -p linux<br />
<br />
== Install and configure the bootloader ==<br />
<br />
=== For BIOS motherboards ===<br />
<br />
Follow [[GRUB#BIOS systems 2]] to install GRUB onto your disk. {{ic|grub-mkconfig}} does not properly detect the ZFS filesystem, so it is necessary to edit {{ic|grub.cfg}} manually:<br />
<br />
{{hc|/boot/grub/grub.cfg|<nowiki><br />
set timeout=2<br />
set default=0<br />
<br />
# (0) Arch Linux<br />
menuentry "Arch Linux" {<br />
search --no-floppy --label --set=root zroot<br />
linux /vmlinuz-linux zfs=zroot rw<br />
initrd /initramfs-linux.img<br />
}<br />
</nowiki>}}<br />
<br />
If you did not create a separate {{ic|/boot}} partition, the kernel and initrd paths have to be in the following format:<br />
<br />
/dataset/@/actual/path <br />
<br />
Example with Arch installed on the main dataset:<br />
<br />
linux /@/boot/vmlinuz-linux zfs=zroot rw<br />
initrd /@/boot/initramfs-linux.img<br />
<br />
Example with Arch installed on a separate dataset, zroot/OS/root:<br />
<br />
linux /OS/root/@/boot/vmlinuz-linux zfs=zroot/OS/root rw <br />
initrd /OS/root/@/boot/initramfs-linux.img<br />
<br />
=== For UEFI motherboards ===<br />
<br />
Use {{ic|EFISTUB}} and {{ic|rEFInd}} for the UEFI boot loader. See [[Beginners' guide#For UEFI motherboards]]. The kernel parameters in {{ic|refind_linux.conf}} for ZFS should include {{ic|1=zfs=bootfs}} or {{ic|1=zfs=zroot}} so the system can boot from ZFS. The {{ic|root}} and {{ic|rootfstype}} parameters are not needed.<br />
<br />
== Unmount and restart ==<br />
<br />
We are almost done!<br />
# exit<br />
# umount /mnt/boot<br />
# zfs umount -a<br />
# zpool export zroot<br />
Now reboot.<br />
<br />
{{Warning|If you do not properly export the zpool, the pool will refuse to import in the ramdisk environment and you will be stuck at the busybox terminal.}}<br />
<br />
== After the first boot ==<br />
<br />
If everything went fine up to this point, your system will boot. Once.<br />
For your system to be able to reboot without issues, you need to enable the {{ic|zfs.target}} to auto mount the pools and set the hostid.<br />
<br />
For each pool you want automatically mounted execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
Enable the target with [[systemd]]:<br />
# systemctl enable zfs.target<br />
<br />
When running ZFS on root, the machine's hostid will not be available at the time of mounting the root filesystem. There are two solutions to this. You can either place your spl hostid in the [[kernel parameters]] in your boot loader, for example by adding {{ic|<nowiki>spl.spl_hostid=0x00bab10c</nowiki>}}; to get your number, use the {{ic|hostid}} command.<br />
<br />
The other, and suggested, solution is to make sure that there is a hostid in {{ic|/etc/hostid}} and then regenerate the initramfs image, which will copy the hostid into it. To write the hostid file safely, use a small C program:<br />
<br />
#include <stdio.h><br />
#include <errno.h><br />
#include <unistd.h><br />
<br />
/* Persist the current hostid to /etc/hostid via sethostid(). */<br />
int main() {<br />
    int res;<br />
    res = sethostid(gethostid());<br />
    if (res != 0) {<br />
        switch (errno) {<br />
        case EACCES:<br />
            fprintf(stderr, "Error! No permission to write the"<br />
                " file used to store the host ID.\n"<br />
                "Are you root?\n");<br />
            break;<br />
        case EPERM:<br />
            fprintf(stderr, "Error! The calling process's effective"<br />
                " user or group ID is not the same as"<br />
                " its corresponding real ID.\n");<br />
            break;<br />
        default:<br />
            fprintf(stderr, "Unknown error.\n");<br />
        }<br />
        return 1;<br />
    }<br />
    return 0;<br />
}<br />
<br />
Copy it, save it as {{ic|writehostid.c}} and compile it with {{ic|gcc -o writehostid writehostid.c}}; finally, execute it and regenerate the initramfs image:<br />
<br />
# ./writehostid<br />
# mkinitcpio -p linux<br />
<br />
You can now delete the two files {{ic|writehostid.c}} and {{ic|writehostid}}. Your system should work and reboot properly now.<br />
<br />
== See also ==<br />
<br />
* [https://github.com/dajhorn/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem HOWTO install Ubuntu to a Native ZFS Root]<br />
* [http://lildude.co.uk/zfs-cheatsheet ZFS cheatsheet]<br />
* [http://www.funtoo.org/wiki/ZFS_Install_Guide Funtoo ZFS install guide]</div>Timemasterhttps://wiki.archlinux.org/index.php?title=Talk:ZFS&diff=335364Talk:ZFS2014-09-13T17:49:11Z<p>Timemaster: /* Swap on zfs */ new section</p>
<hr />
<div>== Swap on zfs ==<br />
<br />
According to a recent post by a top contributor to ZFS on Linux, swap on ZFS has stability issues.<br />
https://clusterhq.com/blog/zfs-on-linux-runtime-stability/<br />
<br />
Should we add a note in the wiki advising against it?</div>Timemasterhttps://wiki.archlinux.org/index.php?title=LVM&diff=283994LVM2013-11-21T22:23:15Z<p>Timemaster: </p>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
[[Category:File systems]]<br />
[[cs:LVM]]<br />
[[de:LVM]]<br />
[[es:LVM]]<br />
[[fr:LVM]]<br />
[[it:LVM]]<br />
[[ru:LVM]]<br />
[[tr:LVM]]<br />
[[zh-CN:LVM]]<br />
{{Article summary start}}<br />
{{Article summary text|This article will provide an example of how to install and configure Arch Linux with Logical Volume Manager (LVM).}}<br />
{{Article summary heading|Required software}}<br />
{{Article summary text|{{pkg|lvm2}}}}<br />
{{Article summary heading|Related}}<br />
{{Article summary wiki|Software RAID and LVM}}<br />
{{Article summary wiki|System Encryption with LUKS}}<br />
{{Article summary wiki|Encrypted LVM}}<br />
{{Article summary end}}<br />
<br />
== Introduction ==<br />
<br />
{{Wikipedia|Logical Volume Manager (Linux)}}<br />
<br />
=== LVM Building Blocks ===<br />
<br />
Logical Volume Management makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel to provide a system of partitions that is independent of the underlying disk's layout. With LVM you can abstract your storage space and have "virtual partitions", making it easier to extend and shrink partitions (subject to your filesystem allowing this) and to add or remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in the problems of fdisking a disk that is in use (and wondering whether the kernel is using the old or the new partition table), and without having to move other partitions out of the way. This is strictly an ease-of-management issue: it does not provide any additional security. However, it sits nicely with the other two technologies we are using.<br />
<br />
The basic building blocks of LVM are:<br />
<br />
* '''Physical volume (PV)''': Partition on hard disk (or even hard disk itself or loopback file) on which you can have volume groups. It has a special header and is divided into physical extents. Think of physical volumes as big building blocks which can be used to build your hard drive.<br />
* '''Volume group (VG)''': Group of physical volumes that are used as storage volume (as one disk). They contain logical volumes. Think of volume groups as hard drives.<br />
* '''Logical volume (LV)''': A "virtual/logical partition" that resides in a volume group and is composed of physical extents. Think of logical volumes as normal partitions.<br />
* '''Physical extent (PE)''': A small part of a disk (usually 4MiB) that can be assigned to a logical volume. Think of physical extents as parts of disks that can be allocated to any partition.<br />
<br />
Example:<br />
'''Physical disks'''<br />
<br />
Disk1 (/dev/sda):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 50GB (Physical volume) |Partition2 80GB (Physical volume) |<br />
|/dev/sda1 |/dev/sda2 |<br />
|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
Disk2 (/dev/sdb):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 120GB (Physical volume) |<br />
|/dev/sdb1 |<br />
| _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _|<br />
<br />
'''LVM logical volumes'''<br />
<br />
Volume Group1 (/dev/MyStorage/ = /dev/sda1 + /dev/sda2 + /dev/sdb1):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <br />
|Logical volume1 15GB |Logical volume2 35GB |Logical volume3 200GB |<br />
|/dev/MyStorage/rootvol|/dev/MyStorage/homevol |/dev/MyStorage/mediavol |<br />
|_ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
=== Advantages ===<br />
<br />
LVM gives you more flexibility than just using normal hard drive partitions:<br />
* Use any number of disks as one big disk.<br />
* Have logical volumes stretched over several disks.<br />
* Create small logical volumes and resize them "dynamically" as they get more filled.<br />
* Resize logical volumes regardless of their order on disk. It does not depend on the position of the LV within the VG, and there is no need to ensure surrounding available space.<br />
* Resize/create/delete logical and physical volumes online. Filesystems on them still need to be resized, but some support online resizing.<br />
* Online/live migration of LV being used by services to different disks without having to restart services.<br />
* Snapshots allow you to backup a frozen copy of the filesystem, while keeping service downtime to a minimum.<br />
<br />
These can be very helpful in a server situation, desktop less so, but you must decide if the features are worth the abstraction.<br />
<br />
=== Disadvantages ===<br />
<br />
* Linux exclusive (almost). There is no official support in most other operating systems (FreeBSD, Windows, ...).<br />
* Additional steps in setting up the system, more complicated.<br />
* If you use the [[btrfs]] filesystem, its subvolume feature will also give you the benefit of a flexible layout. In that case, using the additional abstraction layer of LVM may be unnecessary.<br />
<br />
== Installing Arch Linux on LVM ==<br />
<br />
You should create your LVM volumes between the [[Partitioning]] and [[File_Systems#Step_2:_create_the_new_file_system|mkfs]] steps of the installation procedure. Instead of directly formatting a partition to be your root filesystem, the filesystem will be created inside a logical volume (LV).<br />
<br />
Quick overview: <br />
* Create partition(s) where your PV will reside. Set the partition type to 'Linux LVM', which is 8e if you use MBR, 8e00 for GPT.<br />
* Create your physical volumes (PV). If you have one disk it is best to just create one PV in one large partition. If you have multiple disks you can create partitions on each of them and create a PV on each partition.<br />
* Create your volume group (VG) and add all the PV to it.<br />
* Create logical volumes (LV) inside your VG.<br />
* Continue with the “Format the partitions” step of the Beginners' Guide.<br />
* When you reach the “Create initial ramdisk environment” step in the Beginners' Guide, add the {{ic|lvm2}} hook to {{ic|mkinitcpio.conf}} (see below for details).<br />
<br />
{{Warning|{{ic|/boot}} cannot reside in LVM when using [[GRUB Legacy]], which does not support LVM. [[GRUB]] users do not have this limitation. If you need to use GRUB Legacy, you must create a separate /boot partition and format it directly. }}<br />
<br />
=== Create physical volumes ===<br />
<br />
Make sure you target the right partitions! To find the partitions with type 'Linux LVM':<br />
* MBR system: {{Ic|fdisk -l}}<br />
* GPT system: {{Ic|lsblk}} and then {{Ic|gdisk -l ''disk-device''}}<br />
<br />
Create a physical volume on them:<br />
# pvcreate ''disk-device''<br />
''disk-device'' may be e.g. /dev/sda2.<br />
This command creates a header on each partition so it can be used for LVM.<br />
You can track created physical volumes with:<br />
# pvdisplay<br />
<br />
{{Note|If using a SSD use {{ic|pvcreate --dataalignment 1m /dev/sda2}} (for erase block size < 1MiB), see e.g. [http://serverfault.com/questions/356534/ssd-erase-block-size-lvm-pv-on-raw-device-alignment here]}}<br />
<br />
=== Create volume group ===<br />
<br />
The next step is to create a volume group on this physical volume. First create a volume group on one of the new partitions, and then add to it all the other physical volumes you want to have in it:<br />
# vgcreate VolGroup00 /dev/sda2<br />
# vgextend VolGroup00 /dev/sdb1<br />
You can use any name you like instead of VolGroup00 for the volume group when creating it. You can track how your volume group grows with:<br />
# vgdisplay<br />
<br />
{{Note|You can create more than one volume group if you need to, but then you will not have all your storage presented as one disk.}}<br />
<br />
=== Create logical volumes ===<br />
<br />
Now we need to create logical volumes in this volume group. Create a logical volume with the following command, giving the name of the new logical volume, its size, and the volume group it will live in:<br />
# lvcreate -L 10G VolGroup00 -n lvolhome<br />
This will create a logical volume that you can access later with {{ic|/dev/mapper/Volgroup00-lvolhome}} or {{ic|/dev/VolGroup00/lvolhome}}. Same as with the volume groups, you can use any name you want for your logical volume when creating it.<br />
<br />
To create swap on a logical volume, an additional argument is needed:<br />
# lvcreate -C y -L 10G VolGroup00 -n lvolswap<br />
The {{Ic|-C y}} option creates a contiguous volume, which means that your swap space does not get fragmented across multiple disks or non-contiguous physical extents.<br />
<br />
If you want to fill all the free space left on a volume group, use the next command:<br />
# lvcreate -l +100%FREE VolGroup00 -n lvolmedia<br />
<br />
You can track created logical volumes with:<br />
# lvdisplay<br />
<br />
{{Note|You may need to load the ''device-mapper'' kernel module ('''modprobe dm-mod''') for the above commands to succeed.}}<br />
<br />
{{Tip|You can start out with relatively small logical volumes and expand them later if needed. For simplicity, leave some free space in the volume group so there is room for expansion.}}<br />
<br />
=== Create filesystems and mount logical volumes ===<br />
<br />
Your logical volumes should now be located in {{ic|/dev/mapper/}} and {{ic|/dev/''YourVolumeGroupName''}}. If you cannot find them, use the next commands to bring up the module for creating device nodes and to make volume groups available:<br />
# modprobe dm-mod<br />
# vgscan<br />
# vgchange -ay<br />
Now you can create filesystems on logical volumes and mount them as normal partitions (if you are installing Arch linux, refer to [[Beginners' Guide#Mount the partitions|mounting the partitions]] for additional details):<br />
# mkfs.ext4 /dev/mapper/VolGroup00-lvolhome<br />
# mount /dev/mapper/VolGroup00-lvolhome /home<br />
<br />
{{Warning|When choosing mountpoints, just select your newly created logical volumes (use: {{ic|/dev/mapper/Volgroup00-lvolhome}}). Do '''not''' select the actual partitions on which logical volumes were created (do not use: {{ic|/dev/sda2}}).}}<br />
<br />
=== Add lvm hook to mkinitcpio.conf ===<br />
<br />
You'll need to make sure the {{Ic|udev}} and {{Ic|lvm2}} [[mkinitcpio]] hooks are enabled.<br />
<br />
{{Ic|udev}} is there by default. Edit the file and insert {{Ic|lvm2}} between {{Ic|block}} and {{Ic|filesystems}} like so:<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>HOOKS="base udev ... block lvm2 filesystems"</nowiki>}}<br />
<br />
Afterwards, you can continue in normal installation instructions with the [[Mkinitcpio#Image_creation_and_activation|create an initial ramdisk]] step.<br />
<br />
== Configuration ==<br />
<br />
=== Advanced options ===<br />
<br />
If you need monitoring (needed for snapshots), you can enable lvmetad.<br />
To do so, set {{ic|1=use_lvmetad = 1}} in {{ic|/etc/lvm/lvm.conf}}.<br />
This is now the default.<br />
<br />
You can restrict the volumes that are activated automatically by setting the {{Ic|auto_activation_volume_list}} in {{Ic|/etc/lvm/lvm.conf}}. If in doubt, leave this option commented out.<br />
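<br />
For example, to auto-activate only specific volumes (a sketch using the volume names from this article; adjust to your setup):<br />
<br />
{{hc|/etc/lvm/lvm.conf|<nowiki><br />
activation {<br />
    # Entries may be "vgname" or "vgname/lvname"<br />
    auto_activation_volume_list = [ "VolGroup00/lvolhome", "VolGroup00/lvolswap" ]<br />
}<br />
</nowiki>}}<br />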
<br />
=== Grow logical volume ===<br />
<br />
To grow a logical volume, you first extend the volume itself and then grow the filesystem to use the newly created free space. Let us say we have a 15GB logical volume with ext3 on it and we want to grow it to 20GB. We need to do the following steps:<br />
# lvextend -L 20G VolGroup00/lvolhome<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
You may use {{Ic|lvresize}} instead of {{Ic|lvextend}}, e.g. {{ic|lvresize -L +5G VolGroup00/lvolhome}}.<br />
<br />
If you want to fill all the free space on a volume group, use the next command:<br />
# lvextend -l +100%FREE VolGroup00/lvolhome<br />
<br />
{{Warning|Not all filesystems support growing without loss of data and/or growing online.}}<br />
<br />
{{Note|If you do not resize your filesystem, you will still have a volume with the same size as before (volume will be bigger but partly unused).}}<br />
<br />
=== Shrink logical volume ===<br />
<br />
Because your filesystem is probably as big as the logical volume it resides on, you need to shrink the filesystem first and then shrink the logical volume. Depending on your filesystem, you may need to unmount it first. Let us say we have a 15GB logical volume with ext3 on it and we want to shrink it to 10GB. We need to do the following steps:<br />
# resize2fs /dev/VolGroup00/lvolhome 9G<br />
# lvreduce -L 10G VolGroup00/lvolhome<br />
<br />
Here we shrunk the filesystem more than needed, so that when we shrink the logical volume we do not accidentally cut off the end of the filesystem. Afterwards, we grow the filesystem again to fill all the free space left on the logical volume. You may use {{Ic|lvresize}} instead of {{Ic|lvreduce}}:<br />
# lvresize -L -5G VolGroup00/lvolhome<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
<br />
{{Warning|<br />
* Do not reduce the filesystem size to less than the amount of space occupied by data or you risk data loss.<br />
* Not all filesystems support shrinking without loss of data and/or shrinking online.<br />
}}<br />
<br />
{{Note|It is better to reduce the filesystem to a smaller size than the logical volume, so that after resizing the logical volume, we do not accidentally cut off some data from the end of the filesystem.}}<br />
<br />
=== Remove logical volume ===<br />
<br />
{{Warning|Before you remove a logical volume, make sure to move all data that you want to keep somewhere else, otherwise it will be lost!}}<br />
<br />
First, find out the name of the logical volume you want to remove. You can get a list of all logical volumes installed on the system with:<br />
<br />
# lvs<br />
<br />
Next, look up the mountpoint for your chosen logical volume...:<br />
<br />
$ df -h<br />
<br />
... and unmount it:<br />
<br />
# umount /your_mountpoint<br />
<br />
Finally, remove the logical volume:<br />
<br />
# lvremove /dev/yourVG/yourLV<br />
<br />
Confirm by typing {{ic|y}} and you are done.<br />
<br />
Do not forget to update {{ic|/etc/fstab}}!<br />
<br />
You can verify the removal of your logical volume by running {{ic|lvs}} as root again (see the first step of this section).<br />
<br />
=== Add physical volume to a volume group ===<br />
<br />
You first create a new physical volume on the block device you wish to use, then extend your volume group<br />
<br />
{{bc|1=<br />
# pvcreate /dev/sdb1<br />
# vgextend VolGroup00 /dev/sdb1<br />
}}<br />
<br />
This of course will increase the total number of physical extents on your volume group, which can be allocated by logical volumes as you see fit.<br />
<br />
{{Note|It is considered good form to have a [[Partitioning|partition table]] on your storage medium below LVM. Use the appropriate type code: {{ic|8e}} for MBR, and {{ic|8e00}} for GPT partitions.}}<br />
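<br />
For instance, the GPT type code can be set with {{ic|sgdisk}} (illustrative only; this assumes the new physical volume is partition 1 of {{ic|/dev/sdb}}):<br />
<br />
# sgdisk -t 1:8e00 /dev/sdb<br />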
<br />
=== Remove partition from a volume group ===<br />
<br />
All of the data on that partition needs to be moved to another partition. Fortunately, LVM makes this easy:<br />
# pvmove /dev/sdb1<br />
If you want to have the data on a specific physical volume, specify that as the second argument to {{Ic|pvmove}}:<br />
# pvmove /dev/sdb1 /dev/sdf1<br />
Then the physical volume needs to be removed from the volume group:<br />
# vgreduce myVg /dev/sdb1<br />
Or remove all empty physical volumes:<br />
# vgreduce --all vg0<br />
<br />
And lastly, if you want to use the partition for something else, and want to avoid LVM thinking that the partition is a physical volume:<br />
# pvremove /dev/sdb1<br />
<br />
=== Snapshots ===<br />
<br />
==== Introduction ====<br />
<br />
LVM allows you to take a snapshot of your system in a much more efficient way than a traditional backup. It does this efficiently by using a COW (copy-on-write) policy. The initial snapshot simply references the original data blocks; so long as your data remains unchanged, the snapshot holds no data of its own. Whenever a block is modified, LVM first copies the old version into the snapshot volume, so the old copy is referenced by the snapshot and the new copy by your active system. Thus, you can snapshot a system with 35GB of data using just 2GB of free space, so long as you modify less than 2GB (on both the original and snapshot).<br />
<br />
==== Configuration ====<br />
<br />
You create snapshot logical volumes just like normal ones.<br />
<br />
# lvcreate --size 100M --snapshot --name snap01 /dev/mapper/vg0-pv<br />
With that volume, you may modify less than 100M of data before the snapshot volume fills up.<br />
<br />
Reverting the modified 'pv' logical volume to the state when the 'snap01' snapshot was taken can be done with:<br />
<br />
{{ic|# lvconvert --merge /dev/vg0/snap01}}<br />
<br />
In case the origin logical volume is active, merging will occur on the next reboot. (Merging can be done even from a live CD.)<br />
<br />
The snapshot will no longer exist after merging.<br />
<br />
Also multiple snapshots can be taken and each one can be merged with the origin logical volume at will.<br />
<br />
The snapshot can be mounted and backed up with '''dd''' or '''tar'''. Note that a backup made with '''dd''' will be the size of the snapshot volume, whereas '''tar''' only archives the files residing on it.<br />
To restore, create a snapshot, mount it, write or extract the backup to it, and then merge it with the origin.<br />
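<br />
A minimal sketch of a tar-based backup of the snapshot created above (the mount point and archive path are arbitrary choices):<br />
<br />
# mkdir -p /mnt/snap01<br />
# mount /dev/vg0/snap01 /mnt/snap01<br />
# tar -czf /root/snap01-backup.tar.gz -C /mnt/snap01 .<br />
# umount /mnt/snap01<br />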
<br />
It is important to have the ''dm_snapshot'' module listed in the MODULES variable of {{ic|/etc/mkinitcpio.conf}}, otherwise the system will not boot. If you do this on an already installed system, make sure to rebuild the image with<br />
# mkinitcpio -g /boot/initramfs-linux.img<br />
<br />
Todo: scripts to automate snapshots of root before updates, to rollback... updating {{ic|menu.lst}} to boot snapshots (separate article?)<br />
<br />
Snapshots are primarily used to provide a frozen copy of a filesystem for making backups; a backup that takes two hours provides a more consistent image of the filesystem than directly backing up a live partition.<br />
<br />
See [[Create root filesystem snapshots with LVM]] for automating the creation of clean root filesystem snapshots during system startup<br />
for backup and rollback.<br />
<br />
See also [[Encrypted_LVM]].<br />
<br />
== Troubleshooting ==<br />
<br />
=== Changes that could be required due to changes in the Arch-Linux defaults ===<br />
<br />
The option {{ic|1=use_lvmetad = 1}} must be set in {{ic|/etc/lvm/lvm.conf}}. This is now the default; if you have a {{ic|lvm.conf.pacnew}} file, you must merge this change.<br />
<br />
=== LVM commands do not work ===<br />
<br />
* Load proper module:<br />
# modprobe dm_mod<br />
<br />
The {{ic|dm_mod}} module should be automatically loaded. In case it does not, you can try:<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>MODULES="dm_mod ..."</nowiki>}}<br />
<br />
You will need to [[Mkinitcpio#Image_creation_and_activation|rebuild]] the initramfs to commit any changes you made.<br />
<br />
* Try preceding commands with ''lvm'' like this:<br />
# lvm pvdisplay<br />
<br />
=== Logical Volumes do not show up ===<br />
<br />
If you are trying to mount existing logical volumes, but they do not show up in {{ic|lvscan}}, you can use the following commands to activate them:<br />
<br />
# vgscan<br />
# vgchange -ay<br />
<br />
=== LVM on removable media ===<br />
<br />
Symptoms:<br />
# vgscan<br />
Reading all physical volumes. This may take a while...<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836585984: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836643328: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 0: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 4096: Input/output error<br />
Found volume group "backupdrive1" using metadata type lvm2<br />
Found volume group "networkdrive" using metadata type lvm2<br />
<br />
Cause:<br />
:Removing an external LVM drive without deactivating the volume group(s) first. Before you disconnect, make sure to:<br />
# vgchange -an ''volume group name''<br />
<br />
Fix: assuming you already tried to activate the volume group with {{ic|# vgchange -ay ''vg''}} and are receiving Input/output errors:<br />
# vgchange -an ''volume group name''<br />
Unplug the external drive and wait a few minutes:<br />
# vgscan<br />
# vgchange -ay ''volume group name''<br />
<br />
=== Kernel options ===<br />
<br />
In kernel options, you may need {{ic|dolvm}}. {{ic|<nowiki>root=</nowiki>}} should be set to the logical volume, e.g {{ic|/dev/mapper/''vg-name''-''lv-name''}}.<br />
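<br />
As an illustration (the root volume name {{ic|lvolroot}} is hypothetical; adjust to your own volume group and bootloader syntax):<br />
<br />
kernel /boot/vmlinuz-linux root=/dev/mapper/VolGroup00-lvolroot ro<br />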
<br />
== See also ==<br />
<br />
* [http://sourceware.org/lvm2/ LVM2 Resource Page] on SourceWare.org<br />
* [http://tldp.org/HOWTO/LVM-HOWTO/ LVM HOWTO] article at The Linux Documentation project<br />
* [http://www.gentoo.org/doc/en/lvm2.xml Gentoo LVM2 installation] guide at Gentoo documentation<br />
* [http://en.gentoo-wiki.com/wiki/LVM LVM] article at Gentoo wiki<br />
* [http://www.joshbryan.com/blog/2008/01/02/lvm2-mirrors-vs-md-raid-1/ LVM2 Mirrors vs. MD Raid 1] post by Josh Bryan<br />
* [http://www.tutonics.com/2012/11/ubuntu-lvm-guide-part-1.html Ubuntu LVM Guide Part 1] and [http://www.tutonics.com/2012/12/lvm-guide-part-2-snapshots.html Part 2, which details snapshots]</div>Timemasterhttps://wiki.archlinux.org/index.php?title=Install_Arch_Linux_with_Fake_RAID&diff=283471Install Arch Linux with Fake RAID2013-11-18T05:10:12Z<p>Timemaster: </p>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
[[Category:File systems]]<br />
[[pt:Installing with Fake RAID]]<br />
[[zh-CN:Installing with Fake RAID]]<br />
{{Article summary start}}<br />
{{Article summary text|Provides detailed instructions for installing Arch Linux on "fake RAID" volumes. This guide is intended to supplement the [[Official Arch Linux Install Guide]] or the [[Beginners' Guide]].}}<br />
{{Article summary heading|Related}}<br />
{{Article summary wiki|Installing with Software RAID or LVM}}<br />
{{Article summary wiki|Convert a single drive system to RAID}}<br />
{{Article summary heading|Resources}}<br />
{{Article summary link|Related forum thread|2=https://bbs.archlinux.org/viewtopic.php?id=22038}}<br />
{{Article summary end}}<br />
<br />
The purpose of this guide is to enable use of a RAID set created by the on-board BIOS RAID controller and thereby allow dual-booting of Linux and Windows from partitions '''inside''' the RAID set using GRUB. When using so-called "fake RAID" or "host RAID", the disc sets are reached from {{ic|/dev/mapper/chipsetName_randomName}} and not {{ic|/dev/sdX}}.<br />
<br />
== What is "fake RAID" ==<br />
<br />
From Wikipedia:<br />
<br />
:''Operating system-based RAID doesn't always protect the boot process and is generally impractical on desktop versions of Windows. Hardware RAID controllers are expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a RAID controller chip, but simply a standard disk controller chip with special firmware and drivers. During early stage boot-up, the RAID is implemented by the firmware. When a protected-mode operating system kernel such as Linux or a modern version of Microsoft Windows is loaded, the drivers take over.''<br />
<br />
:''These controllers are described by their manufacturers as RAID controllers, and it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's central processing unit -- not the RAID controller itself -- thus introducing the aforementioned CPU overhead which hardware controllers do not suffer from. Firmware controllers often can only use certain types of hard drives in their RAID arrays (e.g. SATA for Intel Matrix RAID, as there is neither SCSI nor PATA support in modern Intel ICH southbridges; however, motherboard makers implement RAID controllers outside of the southbridge on some motherboards). Before their introduction, a "RAID controller" implied that the controller did the processing, and the new type has become known in technically knowledgeable circles as "fake RAID" even though the RAID itself is implemented correctly. Adaptec calls them "host RAID".''[[wikipedia:RAID]]<br />
<br />
See [[Wikipedia:RAID]] or [https://help.ubuntu.com/community/FakeRaidHowto FakeRaidHowto @ Community Ubuntu Documentation] for more information.<br />
<br />
Despite the terminology, "fake RAID" via {{Pkg|dmraid}} is a robust software RAID implementation that offers a solid system to mirror or stripe data across multiple disks with negligible overhead for any modern system. dmraid is comparable to mdraid (pure Linux software RAID) with the added benefit of being able to completely rebuild a drive after a failure '''before''' the system is ever booted.<br />
<br />
== History ==<br />
<br />
In Linux 2.4, the ATARAID kernel framework provided support for fake RAID (software RAID assisted by the BIOS). For Linux 2.6 the device-mapper framework can, among other nice things like [[LVM]] and EVMS, do the same kind of work as ATARAID in 2.4. Whilst the new code handling the RAID I/O still runs in the kernel, device-mapper is generally configured by a userspace application. It was clear that when using the device-mapper for RAID, detection would go to userspace.<br />
<br />
Heinz Maulshagen created the dmraid tool to detect RAID sets and create mappings for them. The controllers supported are (mostly cheap) fake RAID IDE/SATA controllers which contain BIOS functions. Common examples include: Promise FastTrak controllers; HighPoint HPT37x; Intel Matrix RAID; Silicon Image Medley; and NVIDIA nForce.<br />
<br />
== Supported hardware ==<br />
<br />
* Tested with ICH10R on ''2009.08'' (x86_64) -- [[User:Pointone|pointone]] 23:10, 29 November 2009 (EST)<br />
* Tested with Sil3124 on ''2009.02'' (i686) -- [[User:Loosec|loosec]]<br />
* Tested with nForce4 on ''Core Dump'' (i686 and x86_64) -- [[User:Loosec|loosec]]<br />
* Tested with Sil3512 on ''Overlord'' (x86_64) -- [[User:Loosec|loosec]]<br />
* Tested with nForce2 on ''2011.05'' (i686) -- [[User:Jere2001|Jere2001]]; [[User:drankinatty|drankinatty]]<br />
* Tested with nVidia MCP78S on ''2011.06'' (x86_64) -- [[User:drankinatty|drankinatty]]<br />
* Tested with nVidia CK804 on ''2011.06'' (x86_64) -- [[User:drankinatty|drankinatty]]<br />
* Tested with AMD Option ROM Utility using pdc_adma on ''2011.12'' (x86_64)<br />
<br />
{{Out of date|The installation steps do not reflect the current ArchLinux installation procedure. Need to be updated. Btw, it appears that Intel now recommends mdadm instead of dmraid (see Discussion). Update in progress.}}<br />
<br />
== Preparation ==<br />
{{Warning|Backup all data before playing with RAID. What you do with your hardware is only your own fault. Data on RAID stripes is highly vulnerable to disc failures. Create regular backups or consider using mirror sets. '''Consider yourself warned!'''}}<br />
<br />
*Open up any needed guides (e.g. [[Beginners' Guide]], [[Official Arch Linux Install Guide]]) on another machine. If you do not have access to another machine, print it out.<br />
*Download the latest Arch Linux install image.<br />
*Backup all important files since everything on the target partitions will be destroyed.<br />
<br />
=== Configure RAID sets ===<br />
<br />
{{Warning|If your drives are not already configured as RAID and Windows is already installed, switching to "RAID" may cause Windows to BSOD during boot.[http://support.microsoft.com/kb/316401/]}}<br />
<br />
*Enter your BIOS setup and enable the RAID controller.<br />
**The BIOS may contain an option to configure SATA drives as "IDE", "AHCI", or "RAID"; ensure "RAID" is selected.<br />
*Save and exit the BIOS setup. During boot, enter the RAID setup utility.<br />
** The RAID utility is usually either accessible via the boot menu (often F8, F10 or CTRL+I) or whilst the RAID controller is initializing.<br />
*Use the RAID setup utility to create preferred stripe/mirror sets.<br />
<br />
{{Tip|See your motherboard documentation for details. The exact procedure may vary.}}<br />
<br />
== Boot the installer ==<br />
<br />
See [[Official Arch Linux Install Guide#Pre-Installation]] for details.<br />
<br />
== Load dmraid ==<br />
<br />
Load device-mapper and find RAID sets:<br />
<br />
# modprobe dm_mod<br />
# dmraid -ay<br />
# ls -la /dev/mapper/<br />
<br />
{{Warning|The command {{ic|dmraid -ay}} could fail after booting the Arch Linux 2011.08.19 release, as its image file with the initial ramdisk environment does not support dmraid. You can use an older release such as 2010.05. Note that you must then correct the kernel and initrd names in GRUB's menu.lst after installing, as these releases use different naming.}}<br />
<br />
Example output:<br />
<br />
/dev/mapper/control <- Created by device-mapper; if present, device-mapper is likely functioning<br />
/dev/mapper/sil_aiageicechah <- A RAID set on a Silicon Image SATA RAID controller<br />
/dev/mapper/sil_aiageicechah1 <- First partition on this RAID Set<br />
<br />
If there is only one file ({{ic|/dev/mapper/control}}), check if your controller chipset module is loaded with {{Ic|lsmod}}. If it is, then dmraid does not support this controller or there are no RAID sets on the system (check the RAID BIOS setup again). If so, you may be forced to use [[Installing with Software RAID or LVM|software RAID]] (this means no dual-booted RAID system on this controller).<br />
<br />
If your chipset module is NOT loaded, load it now. For example:<br />
<br />
# modprobe sata_sil<br />
<br />
See {{ic|/lib/modules/`uname -r`/kernel/drivers/ata/}} for available drivers.<br />
<br />
To test the RAID sets:<br />
<br />
# dmraid -tay<br />
<br />
== Perform traditional installation ==<br />
<br />
Switch to '''tty2''' and start the installer:<br />
<br />
# /arch/setup<br />
<br />
=== Partition the RAID set ===<br />
<br />
*Under '''Prepare Hard Drive''' choose '''Manually partition hard drives''' since the '''Auto-prepare''' option will '''not''' find your RAID sets.<br />
*Choose OTHER and type in your RAID set's full path (e.g. {{ic|/dev/mapper/sil_aiageicechah}}). Switch back to '''tty1''' to check your spelling.<br />
*Create the proper partitions the normal way.<br />
<br />
{{Tip|This would be a good time to install the "other" OS if planning to dual-boot. If installing Windows XP to "C:" then all partitions before the Windows partition should be changed to type [1B] (hidden FAT32) to hide them during the Windows installation. When this is done, change them back to type [83] (Linux). Of course, a reboot unfortunately requires some of the above steps to be repeated.}}<br />
<br />
=== Mounting the filesystem ===<br />
<br />
If -- and this is probably the case -- you do not find your newly created partitions under '''Manually configure block devices, filesystems and mountpoints''':<br />
<br />
*Switch back to '''tty1'''.<br />
<br />
*Deactivate all device-mapper nodes:<br />
# dmsetup remove_all<br />
<br />
*Reactivate the newly-created RAID nodes:<br />
# dmraid -ay<br />
# ls -la /dev/mapper<br />
<br />
*Switch to '''tty2''', re-enter the '''Manually configure block devices, filesystems and mountpoints''' menu and the partitions should be available.<br />
<br />
{{Warning|NEVER delete a partition in cfdisk to create two partitions with dmraid after '''Manually configure block devices, filesystems and mountpoints''' has been set; this corrupts the dmraid metadata and renders the existing partitions worthless. Solution: delete the array from the BIOS and re-create it to force creation under a new /dev/mapper ID, then reinstall/repartition.}}<br />
<br />
=== Install and configure Arch ===<br />
<br />
{{Tip|Utilize three consoles: the setup GUI to configure the system, a chroot to install GRUB, and finally a cfdisk reference since RAID sets have weird names.<br />
<br />
* '''tty1:''' chroot and grub-install<br />
* '''tty2:''' /arch/setup<br />
* '''tty3:''' cfdisk for a reference in spelling, partition table and geometry of the RAID set<br />
<br />
Leave programs running and switch to when needed.}}<br />
<br />
Re-activate the installer ('''tty2''') and proceed as normal with the following exceptions:<br />
<br />
*Select Packages<br />
**Ensure '''dmraid''' is marked for installation<br />
<br />
*Configure System<br />
**Add '''dm_mod''' to the MODULES line in {{ic|mkinitcpio.conf}}. If using a mirrored (RAID 1) array, additionally add '''dm_mirror'''<br />
**Add '''chipset_module_driver''' to the MODULES line if necessary<br />
**Add '''dmraid''' to the HOOKS line in {{ic|mkinitcpio.conf}}; preferably after '''sata''' but before '''filesystems'''<br />
<br />
== Install the bootloader ==<br />
<br />
=== Use GRUB2 ===<br />
<br />
Please read [[GRUB2]] for more information about configuring GRUB2. Currently, the latest version of grub-bios is not compatible with fake RAID. If you get an error like this when you run grub-install:<br />
<br />
$ grub-install /dev/mapper/sil_aiageicechah<br />
Path `/boot/grub` is not readable by GRUB on boot. Installation is impossible. Aborting.<br />
<br />
You can try an older version of GRUB. Older packages can be found at [http://arm.konnichi.com/search/ ARM Search]. Read [[Arch Rollback Machine]] for more information.<br />
<br />
1. Download the older GRUB packages:<br />
i686:<br />
http://arm.konnichi.com/extra/os/i686/grub2-bios-1:1.99-6-i686.pkg.tar.xz<br />
http://arm.konnichi.com/extra/os/i686/grub2-common-1:1.99-6-i686.pkg.tar.xz<br />
x86_64:<br />
http://arm.konnichi.com/extra/os/x86_64/grub2-bios-1:1.99-6-x86_64.pkg.tar.xz<br />
http://arm.konnichi.com/extra/os/x86_64/grub2-common-1:1.99-6-x86_64.pkg.tar.xz<br />
<br />
You can verify these packages against their .sig files if you want to be careful.<br />
<br />
2. Install these older packages using "pacman -U *.pkg.tar.xz".<br />
<br />
3. (Optional) Install {{Pkg|os-prober}} if you have other operating systems such as Windows.<br />
<br />
4. $ grub-install /dev/mapper/sil_aiageicechah<br />
<br />
5. $ grub-mkconfig -o /boot/grub/grub.cfg<br />
<br />
6. (Optional) Put grub2-bios and grub2-common in the IgnorePkg array in /etc/pacman.conf if you do not want pacman to upgrade them.<br />
<br />
That's all; grub-mkconfig will generate the configuration automatically. You can edit /etc/default/grub to modify the configuration (timeout, colors, etc.) before running grub-mkconfig.<br />
<br />
== Troubleshooting == <br />
<br />
=== Booting with degraded array ===<br />
<br />
One drawback of the fake RAID approach on GNU/Linux is that dmraid is currently unable to handle degraded arrays, and will refuse to activate. In this scenario, one must resolve the problem from within another OS (e.g. Windows) or via the BIOS/chipset RAID utility. <br />
<br />
Alternatively, if using a mirrored (RAID 1) array, users may temporarily bypass dmraid during the boot process and boot from a single drive:<br />
<br />
# Edit the '''kernel''' line from the [[GRUB]] menu<br />
## Remove references to dmraid devices (e.g. change {{ic|/dev/mapper/raidSet1}} to {{ic|/dev/sda1}}; see the example after this list)<br />
## Append {{Ic|<nowiki>disablehooks=dmraid</nowiki>}} to prevent a kernel panic when dmraid discovers the degraded array<br />
# Boot the system<br />
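<br />
For example, a modified kernel line in the GRUB menu might look like this (the device name and kernel path are illustrative):<br />
<br />
kernel /boot/vmlinuz-linux root=/dev/sda1 ro disablehooks=dmraid<br />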
<br />
=== Error: Unable to determine major/minor number of root device ===<br />
<br />
If you experience a boot failure after a kernel update where the boot process is unable to determine the major/minor number of the root device, this might just be a timing problem (i.e. dmraid -ay might be called before /dev/sd* is fully set up and detected). This can affect both the normal and LTS kernel images. Booting the 'Fallback' kernel image should work. The error will look something like this:<br />
<br />
Activating dmraid arrays...<br />
no block devices found<br />
Waiting 10 seconds for device /dev/mapper/nvidia_baaccajap5<br />
Root device '/dev/mapper/nvidia_baaccajap5' doesn't exist attempting to create it.<br />
Error: Unable to determine major/minor number of root device '/dev/mapper/nvidia_baaccajap5'<br />
<br />
To work around this problem:<br />
:* boot the Fallback kernel<br />
:* insert the 'sleep' hook in the HOOKS line of /etc/mkinitcpio.conf after the 'udev' hook like this:<br />
<br />
HOOKS="base udev sleep autodetect block dmraid filesystems"<br />
<br />
:* rebuild the kernel image and reboot<br />
<br />
=== dmraid mirror fails to activate ===<br />
<br />
Does everything above work correctly the first time, but then when you reboot dmraid cannot find the array?<br />
<br />
This is because Linux software RAID (mdadm) has already attempted to mount the fakeraid array during system init and left it in an unmountable state. To prevent mdadm from running, move the udev rule that is responsible out of the way:<br />
<br />
# cd /lib/udev/rules.d<br />
# mkdir disabled<br />
# mv 64-md-raid.rules disabled/<br />
# reboot</div>Timemasterhttps://wiki.archlinux.org/index.php?title=Steam&diff=247573Steam2013-02-16T23:26:17Z<p>Timemaster: </p>
<hr />
<div>[[Category:Gaming]]<br />
[[Category:Wine]]<br />
[[ja:Steam]]<br />
[[zh-CN:Steam]]<br />
{{Article summary start}}<br />
{{Article summary text|[http://store.steampowered.com/about/ Steam] is a content delivery system made by Valve Software. It is best known as the platform needed to play Source Engine games (e.g. Half-Life 2, Counter-Strike). Today it offers many games from many other developers.}}<br />
<br />
{{Article summary heading|Related}}<br />
{{Article summary wiki|Wine}}<br />
{{Article summary end}}<br />
<br />
See the [[Wikipedia:Steam (software)|Steam Wikipedia page]] and the page in the [http://appdb.winehq.org/objectManager.php?sClass=version&iId=19444 Wine Application Database] for more info.<br />
<br />
== Native Steam on Linux ==<br />
<br />
{{Note|Arch Linux is not [https://support.steampowered.com/kb_article.php?ref&#61;1504-QHXN-8366 officially supported].}}<br />
<br />
{{Note|If you have a pure 64-bit installation, you will need to enable the [[multilib]] repository in pacman. This is because the Steam client is a 32-bit application. It may also make sense to install multilib-devel to provide some important multilib libraries. You also most likely need to install the 32-bit version of your graphics driver to run Steam.}}<br />
<br />
Install {{Pkg|steam}} from the [[multilib]] repository.<br />
<br />
Steam makes heavy usage of the Arial font. A decent Arial font to use is {{Pkg|ttf-liberation}} or the official Microsoft Arial fonts: {{aur|ttf-microsoft-arial}} or {{aur|ttf-ms-fonts}} packages from the [[AUR]]. Asian languages require {{Pkg|wqy-zenhei}} to display properly.<br />
<br />
Steam is '''not supported''' on this distribution. As such, some fixes are needed on the user's part to get things functioning properly. Several games have dependencies which may be missing from your system. If a game fails to launch (often without error messages), make sure all of the libraries listed below that game are installed. (Please help us expand this list)<br />
<br />
===General troubleshooting===<br />
{{Note|In addition to being documented here, any bug/fix/error should be, if not already, reported on Valve's bug tracker on their [https://github.com/ValveSoftware/steam-for-linux GitHub page].}}<br />
<br />
{{Note|Connection problems may occur when using DD-WRT with peer-to-peer traffic filtering.}}<br />
<br />
====GUI problems with KDE====<br />
If you are using KDE and have problems with the GUI (such as lag or random crashes), change the compositing type to OpenGL/Raster; do not use XRender.<br />
<br />
====The Close Button Only Minimizes the Window====<br />
If you have a working tray icon, you have the option of making the close button close to the tray instead of minimizing. To do this, simply set the environment variable <code>STEAM_FRAME_FORCE_CLOSE</code> to 1, e.g. by launching Steam with the following command:<br />
$ STEAM_FRAME_FORCE_CLOSE=1 steam<br />
See [https://github.com/ValveSoftware/steam-for-linux/issues/1025 the related bug report] for more information on this workaround.<br />
<br />
====Flash not working on 64-bit systems====<br />
First ensure {{pkg|lib32-flashplugin}} is installed.<br />
<br />
Then create a local Steam flash plugin folder<br />
mkdir ~/.steam/bin32/plugins/<br />
and create a symbolic link to the global lib32 Flash plugin in the newly created folder:<br />
ln -s /usr/lib32/mozilla/plugins/libflashplayer.so ~/.steam/bin32/plugins/<br />
<br />
====Mouse Cursor Overwritten ====<br />
{{Out of date|This issue is supposed to be fixed: https://github.com/ValveSoftware/steam-for-linux/issues/2 .}}<br />
Steam overwrites the [[Cursor Themes|X11 Cursor theme]] when it launches. This is a problem with Gnome and other WMs/DMs that do not set a cursor theme. You can overcome this for Gnome by setting a mouse cursor theme.<br />
<br />
To fix this issue, become root and put the following into {{ic|/usr/share/icons/default/index.theme}} (creating the directory {{ic|/usr/share/icons/default}} if necessary):<br />
{{hc|/usr/share/icons/default/index.theme|<nowiki><br />
[Icon Theme]<br />
Inherits=Adwaita<br />
</nowiki>}}<br />
<br />
{{Note|Instead of "Adwaita", you can choose another cursor theme (e.g. Human).}}<br />
Alternatively, you can install {{AUR|gnome-cursors-fix}} from the [[AUR]].<br />
<br />
Alternatively, you can create a symlink {{ic|~/.icons/default}} that points to the entry for your cursor in {{ic|/usr/share/icons}}, for example:<br />
<br />
mkdir -p ~/.icons<br />
ln -sT /usr/share/icons/Neutral_Plus ~/.icons/default<br />
<br />
If the cursor gets stuck pointing in the wrong direction after exiting Steam, a workaround is to run <code>xsetroot -cursor_name left_ptr</code> (From [http://awesome.naquadah.org/wiki/FAQ#How_to_change_the_cursor_theme.3F the awesomewm wiki]).<br />
<br />
If a cursor theme is set and Steam still causes cursor problems, your cursor theme may not provide cursors under the names Steam looks for. For example, in KDE's Oxygen_White cursor theme the cursor file is named 'left_ptr', but Steam looks for 'arrow'. The simplest solution is to create symlinks to the appropriate cursor files.<br />
<br />
cd /usr/share/icons/Oxygen_White/cursors/<br />
ln -s ./left_ptr ./arrow<br />
ln -s ./size_fdiag ./bottom_right_corner<br />
ln -s ./size_fdiag ./top_left_corner<br />
ln -s ./size_bdiag ./bottom_left_corner<br />
ln -s ./size_bdiag ./top_right_corner<br />
ln -s ./size_hor ./left_side<br />
ln -s ./size_hor ./right_side<br />
ln -s ./size_ver ./top_side<br />
ln -s ./size_ver ./bottom_side<br />
<br />
The above should work for any KDE compatible cursor theme, but may not work for other ones if the cursor theme uses different names than size_fdiag, size_bdiag, size_hor, and size_ver.<br />
<br />
====Error: S3TC support is missing====<br />
=====Dependencies=====<br />
* {{pkg|libtxc_dxtn}}<br />
* {{pkg|lib32-libtxc_dxtn}}<br />
<br />
====Text is corrupt or missing====<br />
The Steam support [https://support.steampowered.com/kb_article.php?ref=1974-YFKL-4947 instructions] for Windows seem to work on Linux as well: simply download [https://support.steampowered.com/downloads/1974-YFKL-4947/SteamFonts.zip SteamFonts.zip] and install the fonts (copying them to {{ic|~/.fonts/}} works, at least).<br />
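<br />
For example, assuming the archive was downloaded to the current directory:<br />
 $ unzip SteamFonts.zip -d ~/.fonts/<br />
 $ fc-cache -f<br />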
<br />
=== Game-specific dependencies and troubleshooting ===<br />
{{Note|Steam installs library dependencies of a game to a library directory, but some are missing at the moment. Report bugs involving missing libraries on Valve's bug tracker on their [https://github.com/ValveSoftware/steam-for-linux GitHub page] before adding workarounds here, and then provide a link to the bug so the workaround can be removed once the issue is fixed. Libraries like {{pkg|libglu}} and {{pkg|libtxc_dxtn}} are exceptions to this, as they are just part of the implementation of the open drivers.}}<br />
<br />
====Amnesia: The Dark Descent====<br />
=====Dependencies=====<br />
* {{pkg|lib32-freealut}}<br />
* {{pkg|lib32-glu}}<br />
* {{pkg|lib32-libxmu}}<br />
* {{pkg|lib32-sdl_ttf}}<br />
<br />
=====Troubleshooting=====<br />
======Segfault======<br />
If you are using open-source drivers, you will need {{pkg|lib32-libtxc_dxtn}}. See the [http://www.frictionalgames.com/forum/thread-10924.html official forum] for details.<br />
<br />
====And Yet It Moves====<br />
=====Dependencies=====<br />
* {{aur|lib32-libtheora}}<br />
* {{aur|lib32-libjpeg6}}<br />
* {{aur|lib32-libtiff4}}<br />
<br />
=====Compatibility=====<br />
The game refuses to launch, and the following message can be observed on the console:<br />
readlink: extra operand ‘Yet’<br />
Try 'readlink --help' for more information.<br />
This happens because the installation path contains spaces, so the unquoted {{ic|<nowiki>${BASH_SOURCE[0]}</nowiki>}} expansion is split into multiple words. To fix this, open {{ic|~/.steam/root/SteamApps/common/And Yet It Moves/AndYetItMovesSteam.sh}} in a text editor and replace the line<br />
ayim_dir="$(dirname "$(readlink -f ${BASH_SOURCE[0]})")"<br />
with<br />
<br />
ayim_dir="$(dirname "$(readlink -f "${BASH_SOURCE[0]}")")"<br />
<br />
====Bastion====<br />
=====Troubleshooting=====<br />
======Black Screen======<br />
If you get a black screen (with working sound and mouse cursor) in the game menu, you need to install {{pkg|lib32-libtxc_dxtn}}.<br />
# pacman -S lib32-libtxc_dxtn<br />
<br />
====FTL: Faster than Light====<br />
=====Dependencies=====<br />
Libraries are downloaded and placed in the game's data directory for both architectures. As long as you run FTL via the launcher script (or via the shortcut in Steam), you should not need to download any further libraries.<br />
<br />
=====Compatibility=====<br />
After installation, FTL may fail to run due to a 'Text file busy' error (characterised in Steam by your portrait border turning green and then blue again). The easiest way to fix this is to reboot your system; upon logging back in, FTL should run.<br />
<br />
The Steam overlay does not function in FTL, as it is not a 3D-accelerated game, so desktop notifications remain visible. When playing in fullscreen, these notifications may on some systems steal focus and revert you to windowed mode, with no way back to fullscreen short of relaunching. The FTL binaries on Steam have no DRM and the game can be run ''without'' Steam, so in some cases that may be preferable - just ensure that you launch FTL via the launcher script in {{ic|~/.steam/root/SteamApps/common/FTL Faster than Light/data/}} rather than the FTL binary in the $arch directory.<br />
<br />
=====Problems with open-source video driver=====<br />
FTL may fail to run if you are using an open-source driver for your video card. There are two solutions: install a proprietary video driver, or delete (rename, if you are unsure) the library {{ic|libstdc++.so.6}}. On a 64-bit system it is located in {{ic|~/.steam/root/SteamApps/common/FTL\ Faster\ Than\ Light/data/amd64/lib}}; on a 32-bit system, the same library is located in {{ic|~/.steam/root/SteamApps/common/FTL\ Faster\ Than\ Light/data/x86/lib}}.<br />
<br />
====Harvest: Massive Encounter====<br />
=====Dependencies=====<br />
* {{pkg|gtk2}} or {{pkg|lib32-gtk2}}<br />
* {{pkg|libvorbis}} or {{pkg|lib32-libvorbis}}<br />
* {{pkg|nvidia-cg-toolkit}} or {{aur|lib32-nvidia-cg-toolkit}}<br />
* {{aur|libjpeg6}} or {{aur|lib32-libjpeg6}}<br />
<br />
=====Compatibility=====<br />
The game refuses to launch and throws you into a library-installer loop. Edit {{ic|~/.steam/root/SteamApps/common/Harvest Massive Encounter/run_harvest}} and remove everything but<br />
#!/bin/bash<br />
INSTDIR="`dirname $0`" ; cd "${INSTDIR}" ; INSTDIR="`pwd`"<br />
export LD_LIBRARY_PATH=${INSTDIR}/bin:~/.steam/bin<br />
exec ./Harvest<br />
<br />
====Killing Floor====<br />
=====Troubleshooting=====<br />
======Screen resolution======<br />
Killing Floor runs pretty much out of the box, although you might have to change the in-game screen resolution, as the default is '''800x600''' with a '''4:3''' aspect ratio.<br />
Trying to change the screen resolution in-game might crash your desktop environment.<br />
To avoid this, set the desired screen resolution by editing {{ic|~/.killingfloor/System/KillingFloor.ini}} with your preferred editor:<br />
{{hc|$ nano ~/.killingfloor/System/KillingFloor.ini|<nowiki><br />
...<br />
<br />
[WinDrv.WindowsClient]<br />
WindowedViewportX=????<br />
WindowedViewportY=????<br />
FullscreenViewportX=????<br />
FullscreenViewportY=????<br />
MenuViewportX=???<br />
MenuViewportY=???<br />
<br />
...<br />
<br />
[SDLDrv.SDLClient]<br />
WindowedViewportX=????<br />
WindowedViewportY=????<br />
FullscreenViewportX=????<br />
FullscreenViewportY=????<br />
MenuViewportX=????<br />
MenuViewportY=????<br />
<br />
...<br />
</nowiki>}}<br />
{{Note|Replace all the '''????''' with the corresponding numbers for the desired resolution. If you have a 1366x768 screen and want to use it at its fullest, change all the Viewport fields to something like '''ViewportX&#61;1366''' and '''ViewportY&#61;768''' in the corresponding areas.}}<br />
{{Note|The dots in the middle indicate that there are more fields in that .ini file; for screen resolution troubleshooting, you do not need to modify anything else.}}<br />
<br />
Save the file and restart the game; it should now work.<br />
<br />
======Windowed Mode======<br />
Uncheck fullscreen in the options menu, and use {{Keypress|Ctrl}} + {{Keypress|G}} to stop mouse capturing (this is not obvious to discover). This way you can easily minimize the window, do other things, and let your WM handle the rest.<br />
<br />
====Penumbra: Overture====<br />
=====Dependencies=====<br />
(Taken from {{aur|penumbra-collection}} and {{aur|penumbra-overture-ep1-demo}})<br />
* {{pkg|lib32-openal}}<br />
* {{pkg|lib32-sdl_ttf}}<br />
* {{pkg|lib32-libvorbis}}<br />
* {{pkg|lib32-libxft}}<br />
* {{pkg|lib32-glu}}<br />
* {{pkg|lib32-sdl_image}}<br />
<br />
=====Troubleshooting=====<br />
======Windowed Mode======<br />
There is no in-game option to switch to windowed mode; you will have to edit {{ic|~/.frictionalgames/Penumbra/Overture/settings.cfg}} to activate it.<br />
Find {{ic|FullScreen&#61;"true"}} and change it to {{ic|FullScreen&#61;"false"}}, after this the game should start in windowed mode.<br />
<br />
====Psychonauts====<br />
=====Dependencies=====<br />
* {{pkg|lib32-libtxc_dxtn}}<br />
* {{pkg|libtxc_dxtn}}<br />
<br />
====SpaceChem====<br />
=====Dependencies=====<br />
* {{pkg|lib32-sqlite}}<br />
* {{pkg|lib32-sdl_image}}<br />
* {{aur|lib32-sdl_mixer}}<br />
<br />
=====Troubleshooting=====<br />
======Game crash======<br />
The shipped x86 version of SpaceChem does not work on x86_64 with the game's own libSDL* files, and crashes with some strange output.<br />
<br />
To solve this, remove or move the three files {{ic|libSDL-1.2.so.0}}, {{ic|libSDL_image-1.2.so.0}} and {{ic|libSDL_mixer-1.2.so.0}} out of {{ic|~/.steam/root/SteamApps/common/SpaceChem}}.<br />
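<br />
For example, to move them aside rather than delete them:<br />
 $ cd ~/.steam/root/SteamApps/common/SpaceChem<br />
 $ mkdir disabled<br />
 $ mv libSDL-1.2.so.0 libSDL_image-1.2.so.0 libSDL_mixer-1.2.so.0 disabled/<br />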
<br />
====Space Pirates and Zombies====<br />
=====Troubleshooting=====<br />
======No audio======<br />
Apply the fix documented in Serious Sam 3: BFE below.<br />
<br />
====Splice====<br />
=====Dependencies=====<br />
* {{pkg|glu}}<br />
* {{pkg|mono}}<br />
<br />
Splice comes with both x86 and x64 binaries. Steam does not have to be running to launch this game.<br />
<br />
====Steel Storm: Burning Retribution====<br />
=====Troubleshooting=====<br />
======Start with black screen======<br />
The game tries to launch in 1024x768 fullscreen mode by default, which is impossible on some devices (for example, the Samsung Series 9 laptop with Intel HD 4000 graphics).<br />
<br />
You can launch the game in windowed mode instead. To do this, open the game's Properties in Steam, select "Set launch options..." in the General tab, and type "-window".<br />
<br />
Now you can change the resolution in game.<br />
<br />
======No English fonts======<br />
If you use an Intel video card, just disable S3TC in DriConf.<br />
<br />
====Superbrothers: Sword & Sworcery EP====<br />
=====Dependencies=====<br />
* {{Pkg|lib32-glu}}<br />
<br />
====Team Fortress 2====<br />
=====Dependencies=====<br />
* {{pkg|lib32-libtxc_dxtn}}<br />
<br />
=====Troubleshooting=====<br />
======No audio======<br />
This happens if PulseAudio is not present on your system.<br />
If you want to use ALSA, you need to launch Steam or the game directly with {{ic|SDL_AUDIODRIVER&#61;alsa}}<br />
(From [http://steamcommunity.com/app/221410/discussions/0/882966056462819091/#c882966056470753683 SteamCommunity]).<br />
<br />
If it still does not work, you may also need to set the environment variable {{ic|AUDIODEV}}, for instance {{ic|AUDIODEV&#61;Live}}. Use {{ic|aplay -l}} to list the available sound cards.<br />
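<br />
For example, combining both variables when launching Steam from a terminal ({{ic|Live}} is an illustrative card name):<br />
 $ SDL_AUDIODRIVER=alsa AUDIODEV=Live steam<br />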
<br />
====The Book of Unwritten Tales====<br />
If the game does not start, go to Properties --> Uncheck "Enable Steam Community In-Game".<br />
<br />
The game may segfault upon clicking the Settings menu, and possibly during or before gameplay. This is a known issue, and you will unfortunately have to wait for a fix from the developer.<br />
=====Dependencies=====<br />
* {{aur|lib32-libxaw}}<br />
* {{aur|lib32-jasper}}<br />
<br />
====The Clockwork Man====<br />
=====Dependencies=====<br />
* {{pkg|lib32-libidn}}<br />
<br />
====Trine 2====<br />
=====Dependencies=====<br />
* {{pkg|lib32-glu}}<br />
* {{pkg|lib32-libxxf86vm}}<br />
* {{pkg|lib32-libglapi}}<br />
* {{pkg|lib32-libdrm}}<br />
* {{pkg|lib32-openal}}<br />
<br />
=====Troubleshooting=====<br />
* If colors are wrong with FOSS drivers (r600g at least), try running the game in windowed mode; rendering will be corrected. ([https://bugs.freedesktop.org/show_bug.cgi?id=60553 bug report])<br />
<br />
====Counter-Strike: Source====<br />
=====Troubleshooting=====<br />
======Game crashes upon joining======<br />
If the game constantly crashes when trying to join a server, and <code>~/.steam/root/SteamApps/your@account/Counter Strike Source/hl2.sh</code> contains <code>__GL_THREADED_OPTIMIZATIONS=1</code>, try changing it to 0.<br />
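<br />
For example, a hypothetical one-liner for that change (adjust the account name in the path):<br />
 $ sed -i 's/__GL_THREADED_OPTIMIZATIONS=1/__GL_THREADED_OPTIMIZATIONS=0/' ~/.steam/root/SteamApps/your@account/"Counter Strike Source"/hl2.sh<br />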
<br />
====Serious Sam 3: BFE====<br />
=====Dependencies=====<br />
* {{pkg|lib32-libtxc_dxtn}}<br />
<br />
=====Troubleshooting=====<br />
======No audio======<br />
Try running:<br />
# pacman -S lib32-alsa-plugins<br />
# mkdir -p /usr/lib/i386-linux-gnu/alsa-lib/<br />
# ln -s /usr/lib32/alsa-lib/libasound_module_pcm_pulse.so /usr/lib/i386-linux-gnu/alsa-lib/<br />
<br />
If that does not work, try tweaking {{ic|~/.alsoftrc}} as proposed by the [http://steamcommunity.com/app/221410/discussions/3/846940248238406974/ Steam community] (Serious Sam 3: BFE uses OpenAL to output sound). If you are not using PulseAudio, you may want to write the following configuration:<br />
<br />
{{hc|$ nano ~/.alsoftrc|<nowiki><br />
[general]<br />
drivers = alsa<br />
[alsa]<br />
device = default<br />
capture = default<br />
mmap = true<br />
</nowiki>}}<br />
<br />
====Crusader Kings II====<br />
The game is installed into {{ic|$HOME/Steam/SteamApps/common/Crusader Kings II}}.<br />
<br />
The game can be started directly, without the need to run Steam in the background, using the command {{ic|"$HOME/Steam/SteamApps/common/Crusader Kings II/ck2"}}.<br />
<br />
Saves are stored in {{ic|$HOME/Steam/SteamApps/common/Crusader Kings II/save games/}}.<br />
<br />
Proprietary graphics drivers appear to be required; otherwise most of the graphics will not be shown.<br />
=====Dependencies=====<br />
[[AMD Catalyst]] graphics (tested):<br />
* {{pkg|lib32-catalyst-utils}}<br />
* {{pkg|xf86-video-fbdev}}<br />
<br />
[[NVIDIA]] graphics (not tested):<br />
* {{pkg|lib32-nvidia-utils}}<br />
<br />
=====Troubleshooting=====<br />
======No audio======<br />
If the sound is missing in the game, try starting it with the variable set on the same command line:<br />
SDL_AUDIODRIVER=alsa "$HOME/Steam/SteamApps/common/Crusader Kings II/ck2"<br />
or export the variable beforehand:<br />
export SDL_AUDIODRIVER=alsa<br />
<br />
===Skins for Steam===<br />
<br />
The Steam interface can be fully customized by copying its various interface files into its skins directory and modifying them.<br />
<br />
====Steam Skin Manager====<br />
<br />
The process of applying a skin to Steam can be greatly simplified using {{aur|steam-skin-manager}} from the AUR. The package also comes with a hacked version of the Steam launcher which allows the window manager to draw its borders on the Steam window.<br />
<br />
As a result, skins for Steam come in two flavors: one with and one without window buttons. The skin manager will ask whether you use the hacked version, and will automatically apply the theme corresponding to your GTK theme if one is found. You can of course still apply another skin if you want.<br />
<br />
The package ships with themes matching the two default Ubuntu themes, Ambiance and Radiance. A Faience theme is under development and already has its own package on the AUR: {{aur|steam-skin-faience-git}}.<br />
<br />
== Steam on Wine ==<br />
<br />
Install {{Pkg|wine}} from the official repositories and follow the instructions provided in the [[Wine|article]].<br />
<br />
Install the required Microsoft fonts {{AUR|ttf-microsoft-tahoma}} and {{AUR|ttf-ms-fonts}} from the [[AUR]] or through {{AUR|winetricks-svn}}.<br />
{{Note|If you have access to Windows discs, you may want to install {{AUR|ttf-win7-fonts}} instead.}}<br />
<br />
If you have an old Wine prefix ({{ic|~/.wine}}), you should remove it and let Wine create a new one to avoid problems (you can transfer anything you want to keep over to the new Wine prefix).<br />
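<br />
For example:<br />
 $ mv ~/.wine ~/.wine.backup<br />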
<br />
===Installation===<br />
<br />
Download and run the Steam installer from [http://store.steampowered.com/about/ steampowered.com]. It is no longer an {{ic|.exe}} file so you have to start it with {{ic|msiexec}}: <br />
$ msiexec /i SteamInstall.msi<br />
<br />
===Starting Steam===<br />
<br />
On x86:<br />
$ wine ~/.wine/drive_c/Program\ Files/Steam/Steam.exe<br />
<br />
On x86_64 (with steam installed to a clean wine prefix):<br />
$ wine ~/.wine/drive_c/Program\ Files\ \(x86\)/Steam/Steam.exe<br />
<br />
Alternatively, you may use this method:<br />
<br />
$ wine "C:\\Program Files\\Steam\\steam.exe" <br />
<br />
Consider making an alias to start Steam easily, and put it in your shell's rc file. For example:<br />
alias steam='wine ~/.wine/drive_c/Program\ Files\ \(x86\)/Steam/Steam.exe >/dev/null 2>&1 &'<br />
<br />
{{Note|If you are using an nvidia card through bumblebee, you should prefix those commands with {{ic|optirun}}.}}<br />
<br />
===Tips===<br />
<br />
====Performance====<br />
<br />
Consider disabling wine debugging output by adding this to your shell rc file:<br />
export WINEDEBUG=-all<br />
or, just add it to your steam alias to only disable it for steam:<br />
alias steam='WINEDEBUG=-all wine ~/.wine/drive_c/Program\ Files\ \(x86\)/Steam/Steam.exe >/dev/null 2>&1 &'<br />
Additionally, Source games rely on a paged pool memory size specification for audio, and WINE by default does not have this set. To set it:<br />
wine reg add "HKLM\\System\\CurrentControlSet\\Control\\Session Manager\\Memory Management\\" /v PagedPoolSize /t REG_DWORD /d 402653184 /f<br />
<br />
==== Application Launch Options ====<br />
Go to "Properties" -> "Set Launch Options", e.g.:<br />
{{bc|-console -dxlevel 90 -width 1280 -height 1024<br />
}}<br />
* {{ic|-console}}<br />
Activates the console in the application, to change detailed application settings.<br />
* {{ic|-dxlevel}}<br />
Sets the application's DirectX level, e.g. 90 for DirectX version 9.0. It is recommended to use the video card's DirectX version to prevent crashes. See the official Valve Software wiki http://developer.valvesoftware.com/wiki/DirectX_Versions for details.<br />
* {{ic|-width}} and {{ic|-height}}<br />
Set the screen resolution. In some cases the graphics settings are not saved in the application, and the application always starts in the default resolution.<br />
Please refer to http://developer.valvesoftware.com/wiki/Launch_options for a complete list of launch options.<br />
<br />
==== Using a Pre-Existing Steam Install ====<br />
<br />
If you have a shared drive with Windows, or already have a Steam installation somewhere else, you can simply symlink the Steam directory to {{ic|~/.wine/drive_c/Program Files/Steam/}}. However, be sure to do '''all''' the previous steps in this wiki. Confirm Steam launches and logs into your account, ''then'' do this:<br />
<br />
cd ~/.wine/drive_c/Program\ Files/ <br />
mv Steam/ Steam.backup/ # or simply delete the directory<br />
ln -s /mnt/windows_partition/Program\ Files/Steam/<br />
<br />
{{Note|If you have trouble starting Steam after symlinking the entire Steam folder, try linking only the {{ic|steamapps}} subdirectory in your existing wine steam folder instead.}}<br />
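<br />
For example (the paths and the capitalization of the {{ic|steamapps}} folder are illustrative and may differ on your install):<br />
 cd ~/.wine/drive_c/Program\ Files/Steam/<br />
 mv steamapps/ steamapps.backup/<br />
 ln -s /mnt/windows_partition/Program\ Files/Steam/steamapps/<br />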
<br />
{{Note|If you still have trouble starting games, use {{ic|sudo mount --bind /path/to/SteamApps ~/.local/share/Steam/SteamApps -ouser&#61;your-user-name }}; this has been reported as the only working fix for {{ic|TF2}} in some cases.}}<br />
<br />
====Running Steam in a second X Server====<br />
<br />
Assuming you are using the script above to start Steam, make a new script, called {{ic|x.steam.sh}}. You should run this when you want to start Steam in a new X server, and {{ic|steam.sh}} if you want Steam to start in the current X server. <br />
<br />
If due to misconfiguration a black screen is shown, you could always close down the second X server by pressing {{Keypress|Ctrl}} + {{Keypress|Alt}} + {{Keypress|Backspace}}.<br />
<br />
{{bc|1=<br />
#!/bin/bash<br />
# Start Steam in a second X server on display :1.<br />
# xinit sets DISPLAY for the client it launches, so no manual export is needed.<br />
xinit $HOME/steam.sh "$@" -- :1<br />
}}<br />
<br />
Now you can use {{Keypress|Ctrl}} + {{Keypress|Alt}} + {{Keypress|F7}} to get to your first X server with your normal desktop, and {{Keypress|Ctrl}} + {{Keypress|Alt}} + {{Keypress|F8}} to go back to your game. <br />
<br />
Because the second X server is ''only'' running the game and the first X server with all your programs is backgrounded, performance should increase. In addition, it is much more convenient to switch X servers while in game to access other resources, rather than having to exit the game completely or {{Keypress|Alt}}-{{Keypress|Tab}} out. Finally, it is useful for when Steam or WINE goes haywire and leaves a bunch of processes in memory after Steam crashes. Simply {{Keypress|Ctrl}} + {{Keypress|Alt}} + {{Keypress|Backspace}} on the second X server to kill that X and all processes on that desktop will terminate as well. <br />
<br />
'''If you get errors that look like "Xlib: connection to ":1.0" refused by server" when starting the second X''': you will need to adjust your X server access permissions, e.g. with {{ic|xhost}}.<br />
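<br />
One possibility (a permissive example; adapt it to your setup) is to allow your own user on the running X server before launching:<br />
 $ xhost +si:localuser:$USER<br />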
<br />
'''If you lose the ability to use the keyboard while using Steam''': This is an odd bug that does not happen with other games. A solution is to use a WM in the second X as well. Thankfully, you do not need to run a large WM. Openbox and icewm have been confirmed to fix this bug (evilwm, pekwm, lwm ''do not'' work), but the icewm taskbar shows up on the bottom of the game, thus it's recommended to use [[Openbox]]. Install {{Pkg|openbox}} from the [[official repositories]], then add {{Ic|openbox &}} to the top of your {{ic|steam.sh}} file. Note you can run other programs (ex. Teamspeak &) or set X settings (ex. xset, xmodmap) before the WINE call as well.<br />
<br />
====Steam Links in Firefox, Chrome, Etc====<br />
To make steam:// URLs in your browser open with Steam in Wine, there are several approaches. One involves creating steam url-handler keys in gconf, another involves creating protocol files for KDE, and others involve tinkering with desktop files or the Local State file for Chromium. These seem to only work in Firefox or under certain desktop configurations. One way that works more globally, using mimeo (a tool made by Xyne, an Arch TU), is described below. For another working and less invasive (but Firefox-only) way, see the first post [http://ubuntuforums.org/showthread.php?t=433548 here].<br />
<br />
* Create {{ic|/usr/bin/steam}} with your favorite editor and paste:<br />
<br />
{{bc|<br />
#!/bin/sh<br />
#<br />
# Steam wrapper script<br />
#<br />
exec wine "c:\\program files\\steam\\steam.exe" "$@"<br />
}}<br />
<br />
* Make it executable.<br />
<br />
# chmod +x /usr/bin/steam<br />
<br />
* Install {{AUR|mimeo}} and {{AUR|xdg-utils-mimeo}} from the AUR. You will need to replace the existing {{pkg|xdg-utils}} if it is installed. In XFCE, you will also need {{pkg|xorg-utils}}.<br />
<br />
* Create {{ic|~/.config/mimeo.conf}} with your favorite editor and paste:<br />
<br />
{{bc|<br />
/usr/bin/steam %u<br />
^steam://<br />
}}<br />
<br />
* Lastly, open {{ic|/usr/bin/xdg-open}} in your favorite editor. Go to the {{ic|detectDE()}} section and change it to look as follows:<br />
<br />
{{bc|<nowiki><br />
detectDE()<br />
{<br />
#if [ x"$KDE_FULL_SESSION" = x"true" ]; then DE=kde;<br />
#elif [ x"$GNOME_DESKTOP_SESSION_ID" != x"" ]; then DE=gnome;<br />
#elif `dbus-send --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.GetNameOwner string:org.gnome.SessionManager > /dev/null 2>&1` ; then DE=gnome;<br />
#elif xprop -root _DT_SAVE_MODE 2> /dev/null | grep ' = \"xfce4\"$' >/dev/null 2>&1; then DE=xfce;<br />
#elif [ x"$DESKTOP_SESSION" == x"LXDE" ]; then DE=lxde;<br />
#else DE=""<br />
#fi<br />
DE=""<br />
}<br />
</nowiki>}}<br />
<br />
* Restart the browser and you should be good to go. In Chromium, you cannot enter a {{ic|steam://}} link in the URL box as you can in Firefox. The forum link above has a {{ic|steam://open/friends}} link to try if needed.<br />
<br />
{{Note|If you have any problems with file associations after doing this, simply revert to regular xdg-utils and undo your changes to {{ic|/usr/bin/xdg-open}}.}}<br />
{{Note|Those on other distributions who stumble upon this page should see the link above for Firefox-specific instructions. There is no easy way to get this working in Chromium on other distributions.}}<br />
<br />
====No text rendered problem====<br />
If no text/fonts are rendered when starting Steam, try starting it with the {{ic|-no-dwrite}} parameter. Read more in [https://bbs.archlinux.org/viewtopic.php?id=146223 the forum thread about it.]<br />
{{bc|wine ~/.wine/drive_c/Program\ Files\ \(x86\)/Steam/Steam.exe -no-dwrite}}<br />
<br />
== See Also ==<br />
* https://wiki.gentoo.org/wiki/Steam</div>Timemasterhttps://wiki.archlinux.org/index.php?title=Data-at-rest_encryption&diff=242764Data-at-rest encryption2013-01-02T23:37:28Z<p>Timemaster: /* Comparison table */</p>
<hr />
<div>[[Category:Security]]<br />
[[Category:File systems]]<br />
[[it:Disk Encryption]]<br />
{{Article summary start}}<br />
{{Article summary text|Transparent encryption/decryption software}}<br />
{{Article summary heading|Related}}<br />
{{Article summary wiki|dm-crypt with LUKS}}<br />
{{Article summary wiki|eCryptfs}}<br />
{{Article summary wiki|TrueCrypt}}<br />
{{Article summary wiki|EncFS}}<br />
{{Article summary wiki|Mount encrypted volumes in parallel}}<br />
{{Article summary end}}<br />
<br />
This article discusses common techniques available in Arch Linux for cryptographically protecting a logical part of a storage disk (folder, partition, whole disk, ...), so that all data that is written to it is automatically encrypted, and decrypted on-the-fly when read again.<br />
<br />
"Storage disks" in this context can be your computer's hard drive(s), external devices like USB sticks or DVDs, as well as ''virtual'' storage disks like loop-back devices or cloud storage ''(as long as Arch Linux can address it as a block device or filesystem)''.<br />
<br />
==Why use encryption?==<br />
<br />
Disk encryption ensures that files are always stored on disk in an encrypted form. The files only become available to the operating system and applications in readable form while the system is running and unlocked by a trusted user. Reading the encrypted sectors without permission will return garbled random-looking data instead of the actual files.<br />
<br />
For example, this can prevent unauthorized viewing of the data when the computer or hard-disk is:<br />
* located in a place to which non-trusted people might gain access while you're away<br />
* lost or stolen, as with laptops, netbooks or external storage devices<br />
* in the repair shop<br />
* discarded after its end-of-life<br />
<br />
In addition, disk encryption can also be used to add some security against unauthorized attempts to tamper with your operating system. For example, the installation of keyloggers or Trojan horses by attackers who can gain physical access to the system while you're away.<br />
<br />
{{Warning|Disk encryption does '''not''' protect your data from all threats.}}<br />
Threats it does not address include:<br />
* Attackers who can break into your system (e.g. over the Internet) while it is running and after you've already unlocked and mounted the encrypted parts of the disk.<br />
* Attackers who are able to gain physical access to the computer while (or very shortly after) it is running, and have the resources to perform a [[Wikipedia:Cold boot attack|cold boot attack]].<br />
* A government entity, which not only has the resources to easily pull off the above attacks, but also may simply force you to give up your keys/passphrases using various techniques of [[Wikipedia:Coercion|coercion]]. In most non-democratic countries around the world, as well as in the USA and UK, it is legal for law enforcement agencies to do so if they have suspicions that you might be hiding something of interest.<br />
<br />
A very strong disk encryption setup (e.g. full system encryption with no plaintext boot partition and authenticity checking) is required to stand a chance against professional attackers, who are able to tamper with your system before you use it. And even then it is doubtful whether it can really prevent all types of tampering (e.g. hardware keyloggers). The best remedy might be [[Wikipedia:Hardware-based full disk encryption|hardware-based full disk encryption]] (e.g. [[Wikipedia:Trusted_Computing|Trusted Computing]]).<br />
<br />
{{Warning|Disk encryption also won't protect you against someone simply [[Securely wipe disk|wiping your disk]]. [[Backup Programs|Regular backups]] are recommended to keep your data safe.}}<br />
<br />
===Data encryption vs system encryption===<br />
{{Wikipedia|Disk encryption}}<br />
'''Data encryption''', defined as encrypting only the user's data itself (often located within the {{ic|/home}} directory, or on removable media like a data DVD), is the simplest and least intrusive use of disk encryption, but has some significant drawbacks.<br>In modern computing systems, there are many background processes that may cache/store information about user data or parts of the data itself in non-encrypted areas of the hard drive, like:<br />
<br />
* swap partitions<br />
** <span style="color:#555">''(potential remedy: disable swapping)''</span><br />
* {{ic|/tmp}} (temporary files created by user applications)<br />
** <span style="color:#555">''(potential remedies: avoid such applications; mount {{ic|/tmp}} inside a [[ramdisk]] - see the example below)''</span><br />
* {{ic|/var}} (log files and databases and such; for example, mlocate stores an index of all file names in {{ic|/var/lib/mlocate/mlocate.db}})<br />
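<br />
For example, the {{ic|/tmp}} remedy above can be implemented with an fstab line like the following (the size is illustrative):<br />
 tmpfs /tmp tmpfs nodev,nosuid,size=2G 0 0<br />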
<br />
In addition, mere data encryption will leave the system vulnerable to offline system tampering attacks ''(see warnings above)''.<br />
<br />
<br />
'''System encryption''', defined as the encryption of the operating system ''and'' user data, helps to address some of the inadequacies of data encryption.<br />
<br />
Benefits:<br />
* Preventing unauthorized physical access to operating system files ''(but see warning above)''<br />
* Preventing unauthorized physical access to private data that may be cached by the system.<br />
Disadvantages:<br />
* unlocking/locking of the encrypted parts of the disk can no longer coincide with user login/logout, because now the unlocking already needs to happen before or during boot<br />
<br />
<br />
In practice, there's not always a clear line between data encryption and system encryption, and many different compromises and customized setups are possible.<br />
<br />
In any case, disk encryption should only be viewed as an adjunct to the existing security mechanisms of the operating system - focused on securing offline physical access, while relying on ''other'' parts of the system to provide things like network security and user-based access control.<br />
<br />
==Available methods==<br />
<br />
All disk encryption methods operate in such a way that even though the disk actually holds encrypted data, the operating system and applications "see" it as the corresponding normal readable data as long as the cryptographic container (i.e. the logical part of the disk that holds the encrypted data) has been "unlocked" and mounted.<br />
<br />
For this to happen, some "secret information" (usually in the form of a keyfile and/or passphrase) needs to be supplied by the user, from which the actual encryption key can be derived (and stored in the kernel keyring for the duration of the session).<br />
<br />
If you are completely unfamiliar with this sort of operation, please first read the [[#How the encryption works]] section below.<br />
<br />
The available disk encryption methods can be separated into two types by their layer of operation:<br />
<br />
===Stacked filesystem encryption===<br />
<br />
Stacked filesystem encryption solutions are implemented as a layer that stacks on top of an existing filesystem, causing all files written to an encryption-enabled folder to be encrypted on-the-fly before the underlying filesystem writes them to disk, and decrypted whenever the filesystem reads them from disk. This way, the files are stored in the host filesystem in encrypted form (meaning that their contents, and usually also their file/folder names, are replaced by random-looking data of roughly the same length), but other than that they still exist in that filesystem as they would without encryption, as normal files / symlinks / hardlinks / etc.<br />
<br />
The way it is implemented is that, to unlock the folder storing the raw encrypted files in the host filesystem (the "lower directory"), it is mounted (using a special stacked pseudo-filesystem) onto itself or optionally a different location (the "upper directory"), where the same files then appear in readable form - until it is unmounted again, or the system is turned off.<br />
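<br />
For illustration, a minimal EncFS session, with {{ic|~/.crypt}} as a hypothetical lower directory and {{ic|~/crypt}} as the corresponding upper directory (on first use, EncFS interactively creates the container):<br />
 $ encfs ~/.crypt ~/crypt<br />
 $ fusermount -u ~/crypt<br />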
<br />
Available solutions in this category are:<br />
<br />
:;[[System_Encryption_with_eCryptfs|eCryptfs]]: ''...''<br />
<br />
:;[[EncFS]]: ''...''<br />
<br />
===Block device encryption===<br />
<br />
Block device encryption methods, on the other hand, operate ''below'' the filesystem layer and make sure that everything written to a certain block device (i.e. a whole disk, or a partition, or a file acting as a virtual loop-back device) is encrypted. This means that while the block device is offline, its whole content looks like a large blob of random data, with no way of determining what kind of filesystem and data it contains. Accessing the data happens, again, by mounting the protected container (in this case the block device) to an arbitrary location in a special way.<br />
<br />
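For illustration, a minimal dm-crypt+LUKS sequence for a new container ({{ic|/dev/sdXn}} is a placeholder, and {{ic|luksFormat}} irrevocably overwrites its contents):<br />
 # cryptsetup luksFormat /dev/sdXn<br />
 # cryptsetup luksOpen /dev/sdXn mycrypt<br />
 # mkfs.ext4 /dev/mapper/mycrypt<br />
 # mount /dev/mapper/mycrypt /mnt<br />
<br />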
The following "block device encryption" solutions are available in Arch Linux:<br />
<br />
:;[[loop-AES]]:''loop-AES is a descendant of cryptoloop and is a secure and fast solution to system encryption.''<br />
::''However loop-AES is considered less user-friendly than other options as it requires non-standard kernel support.''<br />
<br />
:;[[System_Encryption_with_LUKS|dm-crypt + LUKS]]: ''dm-crypt is the standard device-mapper encryption functionality provided by the Linux kernel. It can be used directly by those who like to have full control over all aspects of partition and key management.''<br />
::''LUKS is an additional convenience layer which stores all of the needed setup information for dm-crypt on the disk itself and abstracts partition and key management in an attempt to improve ease of use.''<br />
<br />
:;[[TrueCrypt]]: ''...''<br />
<br />
For practical implications of the chosen layer of operation, see the [[#practical_implications|comparison table]] below, as well as [http://ecryptfs.sourceforge.net/ecryptfs-faq.html#compare].<br />
<br />
===Comparison table===<br />
<br />
{| class="wikitable" style="text-align:center; cell-padding:100px; "<br />
|-<br />
| colspan="3" style="border-left-color:transparent; border-top-color:transparent; text-align:left;" |<br />
=====''summary''=====<br />
! scope="col" style="background:#E2E2E2" | Loop-AES<br />
! scope="col" style="background:#E2E2E2" | dm-crypt + LUKS<br />
! scope="col" style="background:#E2E2E2" | TrueCrypt<br />
! scope="col" style="background:#E2E2E2" | eCryptfs<br />
! scope="col" style="background:#E2E2E2" | EncFS<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent; width:20px" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
type <br />
| colspan="3" | block device encryption<br />
| colspan="2" | stacked filesystem encryption<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
main selling points<br />
| longest-existing one; possibly the fastest; works on legacy systems<br />
| de-facto standard for block device encryption on Linux; very flexible<br />
| very portable, well-polished, self-contained solution<br />
| slightly faster than EncFS; individual encrypted files portable between systems<br />
| easiest one to use; supports non-root administration<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
availability in Arch Linux<br />
| must manually compile custom kernel<br />
| ''kernel modules:'' already shipped with default kernel; ''tools:'' {{Pkg|device-mapper}}, {{Pkg|cryptsetup}} [core]<br />
| {{Pkg|truecrypt}} [extra]<br />
| ''kernel module:'' already shipped with default kernel; ''tools:'' {{Pkg|ecryptfs-utils}} [community]<br />
| {{Pkg|encfs}} [community]<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
license<br />
| GPL<br />
| GPL<br />
| custom<sup>[[#Notes_.26_References|[1]]]</sup><br />
| GPL<br />
| GPL<br />
|-<br />
| colspan="3" style="height:20px; border-color:transparent" |<br />
| colspan="5" style="border-right-color:transparent" |<br />
|-<br />
| colspan="3" style="border-left-color:transparent; border-top-color:transparent; text-align:left;" |<br />
=====''basic classification''=====<br />
! scope="col" style="background:#E2E2E2" | Loop-AES<br />
! scope="col" style="background:#E2E2E2" | dm-crypt + LUKS<br />
! scope="col" style="background:#E2E2E2" | TrueCrypt<br />
! scope="col" style="background:#E2E2E2" | eCryptfs<br />
! scope="col" style="background:#E2E2E2" | EncFS<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
encrypts...<br />
| colspan="3" | whole block devices<br />
| colspan="2" | files<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
container for encrypted data may be...<br />
| colspan="3" |<br />
* a disk or disk partition<br />
* a file acting as a virtual partition<br />
| colspan="2" |<br />
* a directory in an existing file system<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
relation to filesystem<br />
| colspan="3" | operates below the filesystem layer - doesn't care whether the content of the encrypted block device is a filesystem, a partition table, a LVM setup, or anything else<br />
| colspan="2" | adds an additional layer to an existing filesystem, to automatically encrypt/decrypt files whenever they're written/read<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
encryption implemented in...<br />
| kernelspace<br />
| kernelspace<br />
| kernelspace<br />
| kernelspace<br />
| userspace<br>''(using FUSE)''<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
cryptographic metadata stored in...<br />
| ?<br />
| ?<br />
| ?<br />
| header of each encrypted file<br />
| control file at the top level of each EncFS container<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
wrapped encryption key stored in...<br />
| ?<br />
| ?<br />
| ?<br />
| key file that can be stored anywhere<br />
| control file at the top level of each EncFS container<br />
|-<br />
| colspan="3" style="height:20px; border-color:transparent" |<br />
| colspan="5" style="border-right-color:transparent" |<br />
|-<br />
| colspan="3" style="border-left-color:transparent; border-top-color:transparent; text-align:left;" |<br />
=====''practical implications''=====<br />
! scope="col" style="background:#E2E2E2" | Loop-AES<br />
! scope="col" style="background:#E2E2E2" | dm-crypt + LUKS<br />
! scope="col" style="background:#E2E2E2" | TrueCrypt<br />
! scope="col" style="background:#E2E2E2" | eCryptfs<br />
! scope="col" style="background:#E2E2E2" | EncFS<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
file metadata (number of files, dir structure, file sizes, permissions, mtimes, etc.) is encrypted<br />
| colspan="3" | <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| colspan="2" | <span style="font-size:160%; color:#CF2525;">✖</span><br>''(file and dir names can be encrypted though)''<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
can be used to encrypt whole hard drives (including partition tables)<br />
| colspan="3" | <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| colspan="2" | <span style="font-size:160%; color:#CF2525;">✖</span><br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
can be used to encrypt swap space<br />
| colspan="3" | <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| colspan="2" | <span style="font-size:160%; color:#CF2525;">✖</span><br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
can be used without pre-allocating a fixed amount of space for the encrypted data container<br />
| colspan="3" | <span style="font-size:160%; color:#CF2525;">✖</span><br />
| colspan="2" | <span style="font-size:210%; color:#5F9E23;">✔</span><br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
can be used to protect existing filesystems without block device access, e.g. NFS or Samba shares, cloud storage, etc.<br />
| colspan="3" | &nbsp;&nbsp;&nbsp;&nbsp;<span style="font-size:160%; color:#CF2525;">✖</span><sup>[[#Notes_.26_References|[2]]]</sup><br />
| colspan="2" | <span style="font-size:210%; color:#5F9E23;">✔</span><br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
allows offline file-based backups of encrypted files<br />
| colspan="3" | <span style="font-size:160%; color:#CF2525;">✖</span><br />
| colspan="2" | <span style="font-size:210%; color:#5F9E23;">✔</span><br />
|-<br />
| colspan="3" style="height:20px; border-color:transparent" |<br />
| colspan="5" style="border-right-color:transparent" |<br />
|-<br />
| colspan="3" style="border-left-color:transparent; border-top-color:transparent; text-align:left;" |<br />
=====''usability features''=====<br />
! scope="col" style="background:#E2E2E2" | Loop-AES<br />
! scope="col" style="background:#E2E2E2" | dm-crypt + LUKS<br />
! scope="col" style="background:#E2E2E2" | TrueCrypt<br />
! scope="col" style="background:#E2E2E2" | eCryptfs<br />
! scope="col" style="background:#E2E2E2" | EncFS<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
support for automounting on login<br />
| ?<br />
| ?<br />
| ?<br />
| ?<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
support for automatic unmounting in case of inactivity<br />
| ?<br />
| ?<br />
| ?<br />
| ?<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
non-root users can create/destroy containers for encrypted data<br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
provides a GUI<br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
|-<br />
| colspan="3" style="height:20px; border-color:transparent" |<br />
| colspan="5" style="border-right-color:transparent" |<br />
|-<br />
| colspan="3" style="border-left-color:transparent; border-top-color:transparent; text-align:left;" |<br />
=====''security features''=====<br />
! scope="col" style="background:#E2E2E2" | Loop-AES<br />
! scope="col" style="background:#E2E2E2" | dm-crypt + LUKS<br />
! scope="col" style="background:#E2E2E2" | TrueCrypt<br />
! scope="col" style="background:#E2E2E2" | eCryptfs<br />
! scope="col" style="background:#E2E2E2" | EncFS<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
supported ciphers<br />
| AES<br />
| AES, Twofish, Serpent<br />
| ?<br />
| AES, Blowfish, Twofish, ...<br />
| AES, Twofish<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
support for salting<br />
| ?<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br>(with LUKS)<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| ?<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
support for cascading multiple ciphers<br />
| ?<br />
| ?<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| ?<br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
support for key-slot diffusion<br />
| ?<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br>(with LUKS)<br />
| ?<br />
| ?<br />
| ?<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
protection against key scrubbing<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| ?<br />
| ?<br />
| ?<br />
| ?<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
support for multiple (independently revokable) keys for the same encrypted data<br />
| ?<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br>(with LUKS)<br />
| ?<br />
| ?<br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
|- valign="top"<br />
| colspan="3" style="height:20px; border-color:transparent" |<br />
| colspan="5" style="border-right-color:transparent" |<br />
|-<br />
| colspan="3" style="border-left-color:transparent; border-top-color:transparent; text-align:left;" |<br />
<br />
=====''performance features''=====<br />
! scope="col" style="background:#E2E2E2" | Loop-AES<br />
! scope="col" style="background:#E2E2E2" | dm-crypt + LUKS<br />
! scope="col" style="background:#E2E2E2" | TrueCrypt<br />
! scope="col" style="background:#E2E2E2" | eCryptfs<br />
! scope="col" style="background:#E2E2E2" | EncFS<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
multithreading support<br />
| ?<br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| ?<br />
| ?<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
hardware-accelerated encryption support<br />
| ?<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| ?<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
optimised handling of sparse files<br />
| ?<br />
| ?<br />
| ?<br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
| ?<br />
|-<br />
| colspan="3" style="height:20px; border-color:transparent" |<br />
| colspan="3" style="border-right-color:transparent" |<br />
| colspan="2" style="border-color:transparent" |<br />
|-<br />
| colspan="3" style="border-left-color:transparent; border-top-color:transparent; text-align:left;" |<br />
=====''block device encryption specific''=====<br />
! scope="col" style="background:#E2E2E2" | Loop-AES<br />
! scope="col" style="background:#E2E2E2" | dm-crypt + LUKS<br />
! scope="col" style="background:#E2E2E2" | TrueCrypt<br />
| colspan="2" rowspan="2" style="border-color:transparent" |<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
support for (manually) resizing the encrypted block device in-place<br />
| ?<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
|-<br />
| colspan="3" style="height:20px; border-color:transparent" |<br />
| colspan="3" style="border-color:transparent" |<br />
| colspan="2" style="border-right-color:transparent" |<br />
|-<br />
| colspan="6" style="border-left-color:transparent; text-align:left;" |<br />
=====''stacked filesystem encryption specific''=====<br />
! scope="col" style="background:#E2E2E2" | eCryptfs<br />
! scope="col" style="background:#E2E2E2" | EncFS<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="5" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
supported file systems<br />
| ext3, ext4, xfs (with caveats), jfs, nfs...<br />
| ?<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="5" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
ability to encrypt filenames<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="5" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
ability to ''not'' encrypt filenames<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
|-<br />
| colspan="3" style="height:20px; border-color:transparent" |<br />
| colspan="5" style="border-right-color:transparent" |<br />
|-<br />
| colspan="3" style="border-left-color:transparent; border-top-color:transparent; text-align:left;" |<br />
=====''compatibility & prevalence''=====<br />
! scope="col" style="background:#E2E2E2" | Loop-AES<br />
! scope="col" style="background:#E2E2E2" | dm-crypt + LUKS<br />
! scope="col" style="background:#E2E2E2" | Truecrypt<br />
! scope="col" style="background:#E2E2E2" | eCryptfs<br />
! scope="col" style="background:#E2E2E2" | EncFs<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
supported Linux kernel versions<br />
| 2.0 or newer<br />
| ?<br />
| ?<br />
| ?<br />
| 2.4 or newer<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" rowspan="3" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" | encrypted data can also be accessed from...<br />
! scope="row" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" | Windows<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span> (with <sup>[[#Notes_.26_References|[3]]]</sup>)<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span> (with <sup>[[#Notes_.26_References|[4]]]</sup>)<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| ?<br />
| ?<br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" | Mac OS X<br />
| ?<br />
| ?<br />
| <span style="font-size:210%; color:#5F9E23;">✔</span><br />
| ?<br />
| &nbsp;&nbsp;&nbsp;&nbsp;<span style="font-size:210%; color:#5F9E23;">✔</span><sup>[[#Notes_.26_References|[5]]]</sup><br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" | FreeBSD<br />
| ?<br />
| ?<br />
| <span style="font-size:160%; color:#CF2525;">✖</span><br />
| ?<br />
| &nbsp;&nbsp;&nbsp;&nbsp;<span style="font-size:210%; color:#5F9E23;">✔</span><sup>[[#Notes_.26_References|[6]]]</sup><br />
|- valign="top"<br />
| style="border-left-color:transparent; border-bottom-color:transparent" |<br />
! scope="row" colspan="2" style="text-align:left; font-weight:normal; color:#393939; background:#E2E2E2; padding:0 6px" |<br />
used by<br />
| ?<br />
| <br />
* ''Arch Linux installer'' (system encryption)<br />
* ''Ubuntu alternate installer'' (system encryption)<br />
| ?<br />
|<br />
* ''Ubuntu installer'' (home dir encryption)<br />
* ''Chromium OS'' (encryption of cached user data<sup>[[#Notes_.26_References|[7]]]</sup>)<br />
| ?<br />
|}<br />
<br />
==Preparation==<br />
<br />
===Choosing a setup===<br />
<br />
Which disk encryption setup is appropriate for you will depend on your goals (please read [[#Why_use_encryption?]] above) and system parameters.<br><br />
Among other things, you will need to answer the following questions:<br />
<br />
<ul><br />
<li>What kind of "attacker" do you want to protect against?<br />
<ul style="list-style-type:circle;color:#777;font-size:90%;line-height:1em;margin-top:0"><br />
<li>casual computer user trying to passively spy on your disk contents while your system is turned off / stolen / etc.</li><br />
<li>professional cryptanalyst who can get repeated read/write access to your system before and after you use it</li><br />
<li>anything in between</li><br />
</ul></li><br />
</ul><br />
<br />
<ul><br />
<li>What encryption strategy shall be employed?<br />
<ul style="list-style-type:circle;color:#777;font-size:90%;line-height:1em;margin-top:0"><br />
<li>data encryption</li><br />
<li>system encryption</li><br />
<li>something in between</li><br />
</ul></li><br />
</ul><br />
<br />
<ul><br />
<li>How should swap, {{ic|/tmp}}, etc. be taken care of?<br />
<ul style="list-style-type:circle;color:#777;font-size:90%;line-height:1em;margin-top:0"><br />
<li>ignore, and hope no data is leaked</li><br />
<li>disable or mount as ramdisk</li><br />
<li>encrypt ''(as part of full disk encryption, or separately)''</li><br />
</ul></li><br />
</ul><br />
<br />
<ul><br />
<li>How should encrypted parts of the disk be unlocked?<br />
<ul style="list-style-type:circle;color:#777;font-size:90%;line-height:1em;margin-top:0"><br />
<li>passphrase ''(same as login password, or separate)''</li><br />
<li>keyfile ''(e.g. on a USB stick, that you keep in a safe place or carry around with yourself)''</li><br />
<li>both</li><br />
</ul></li><br />
</ul><br />
<br />
<ul><br />
<li>''When'' should encrypted parts of the disk be unlocked?<br />
<ul style="list-style-type:circle;color:#777;font-size:90%;line-height:1em;margin-top:0"><br />
<li>before boot</li><br />
<li>during boot</li><br />
<li>at login</li><br />
<li>manually on demand ''(after login)''</li><br />
</ul></li><br />
</ul><br />
<br />
<ul><br />
<li>How should multiple users be accommodated?<br />
<ul style="list-style-type:circle;color:#777;font-size:90%;line-height:1em;margin-top:0"><br />
<li>not at all</li><br />
<li>using a shared passphrase/key</li><br />
<li>independently issued and revocable passphrases/keys for the same encrypted part of the disk</li><br />
<li>separate encrypted parts of the disk for different users</li><br />
</ul></li><br />
</ul><br />
<br />
Then you can go on to make the required technical choices (see [[#Available_methods]] above, and [[#How_the_encryption_works]] below), regarding:<br />
<br />
* stacked filesystem encryption vs. blockdevice encryption<br />
* key management<br />
* cipher and mode of operation<br />
* metadata storage<br />
* location of the "lower directory" (in case of stacked filesystem encryption)<br />
* ...<br />
<br />
In practice, it could turn out something like:<br />
<br />
<dl><dt style="font-weight:normal">'''''Example 1:''''' simple data encryption (internal hard drive)<br />
<dd><br />
&bull; '''a folder called "~/Private"''' in the user's home dir encrypted with ''EncFS''<br />
<span style="color:#777">'' ├──> encrypted versions of the files end up in ~/.Private''</span><br />
<span style="color:#777">'' └──> unlocked on demand with dedicated passphrase''</span><br />
</dl><br />
<br />
<dl><dt style="font-weight:normal">'''''Example 2:''''' simple data encryption (removable media)<br />
<dd><br />
&bull; '''whole external USB drive''' encrypted with ''TrueCrypt''<br />
<span style="color:#777">'' └──> unlocked when attached to the computer''</span><br />
<span style="color:#777">'' (using dedicated passphrase + using ~/photos/2006-09-04a.jpg as covert keyfile)''</span><br />
</dl><br />
<br />
<dl><dt style="font-weight:normal">'''''Example 3:''''' partial system encryption<br />
<dd><br />
&bull; each user's '''home directory''' encrypted with ''eCryptfs''<br />
<span style="color:#777">'' └──> unlocked on login, using login passphrase''</span><br />
&bull; '''swap''' and '''/tmp''' partitions encrypted with ''dm-crypt+LUKS''<br />
<span style="color:#777">'' └──> using automatically generated per-session throwaway key''</span><br />
&bull; indexing/caching of contents of /home by slocate (and similar apps) disabled<br />
</dl><br />
<br />
<dl><dt style="font-weight:normal">'''''Example 4:''''' system encryption<br />
<dd><br />
&bull; '''whole hard drive except /boot partition''' encrypted with ''dm-crypt+LUKS''<br />
<span style="color:#777">'' └──> unlocked during boot, using USB stick with keyfile (shared by all users)''</span><br />
</dl><br />
<br />
<dl><dt style="font-weight:normal">'''''Example 5:''''' paranoid system encryption<br />
<dd><br />
&bull; '''whole hard drive''' encrypted with ''dm-crypt+LUKS''<br />
<span style="color:#777">'' └──> unlocked before boot, using dedicated passphrase + USB stick with keyfile''</span><br />
<span style="color:#777">'' (different one issued to each user - independently revocable)''</span><br />
&bull; '''/boot partition''' located on aforementioned USB stick<br />
</dl><br />
<br />
Many other combinations are of course possible. You should carefully plan what kind of setup will be appropriate for your system.<br />
<br />
===Choosing a strong passphrase===<br />
<br />
When relying on a passphrase, it must be complex enough that it cannot easily be guessed or broken by brute force. Strong passphrases rest on two tenets: ''length'' and ''randomness''.<br><br />
Refer to [http://www.iusmentis.com/security/passphrasefaq/ The passphrase FAQ] for a detailed discussion, and especially consider the [http://world.std.com/~reinhold/diceware.html Diceware Passphrase] method.<br />
<br />
Another aspect of the strength of the passphrase is that it must not be easily recoverable from other places.<br />
If you use the same passphrase for disk encryption as you use for your login password (useful e.g. to auto-mount the encrypted partition or folder on login), make sure that {{ic|/etc/shadow}} either also ends up on an encrypted partition, or uses a strong hash algorithm (i.e. sha512/bcrypt, not md5) for the stored password hash (see [[SHA_password_hashes]] for more info).<br />
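<br />
As one illustrative sketch (not a recommendation of a particular scheme), a random alphanumeric passphrase can be generated on the command line; the length and character set here are arbitrary choices:<br />
 $ tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32; echo<br />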
<br />
===Preparing the disk===<br />
<br />
Before setting up disk encryption on a (part of a) disk, consider securely wiping it first. This consists of overwriting the entire drive or partition with a stream of zero bytes or random bytes, and is done for one or both of the following reasons:<br />
<br />
<ul><br />
<li><p>'''prevent recovery of previously stored data'''</p><br />
Disk encryption does not change the fact that individual sectors are only overwritten on demand, when the file system creates or modifies the data those particular sectors hold (see [[#How_the_encryption_works]] below). Sectors which the filesystem considers "not currently used" are not touched, and may still contain remnants of data from previous filesystems. The only way to make sure that all data previously stored on the drive cannot be [[Wikipedia:Data_recovery|recovered]] is to erase it manually.<br><br />
For this purpose it does not matter whether zero bytes or random bytes are used (although wiping with zero bytes will be much faster).<br />
</li><br />
<br />
<li><p>'''prevent disclosure of usage patterns on the encrypted drive'''</p><br />
Ideally, the whole encrypted part of the disk should be indistinguishable from uniformly random data. This way, no unauthorized person can know which and how many sectors actually contain encrypted data - which may be a desirable goal in itself (as part of true confidentiality), and also serves as an additional barrier against attackers trying to break the encryption.<br><br />
For this purpose, wiping the disk using high-quality random data is crucial.<br />
</li><br />
</ul><br />
<br />
The second reason only makes sense in combination with block device encryption, because in the case of stacked filesystem encryption the encrypted data is easily identifiable anyway (in the form of distinct encrypted files in the host filesystem). Also note that even if you only intend to encrypt a particular folder, you will have to erase the whole partition if you want to get rid of files that were previously stored in that folder in unencrypted form. If there are other folders on the same partition, you will have to back them up and move them back afterwards.<br />
<br />
Once you have decided which kind of disk erasure you want to perform, refer to the [[Securely_wipe_disk]] article for technical instructions.<br />
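<br />
For example, overwriting an entire device with high-quality random data could look like this (a destructive sketch; {{ic|/dev/sdX}} is a placeholder for the target drive):<br />
 # dd if=/dev/urandom of=/dev/sdX bs=1M<br />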
<br />
{{Tip|In deciding which method to use for secure erasure of a hard disk drive, remember that it only needs to be performed once, for as long as the drive is used as an encrypted drive.}}<br />
<br />
==How the encryption works==<br />
<br />
This section is intended as a high-level introduction to the concepts and processes which are at the heart of usual disk encryption setups.<br />
<br />
It does not go into technical or mathematical details (consult the appropriate literature for that), but should provide a system administrator with a rough understanding of how different setup choices (especially regarding key management) can affect usability and security.<br />
<br />
===Basic principle===<br />
<br />
For the purposes of disk encryption, each blockdevice (or individual file in the case of stacked filesystem encryption) is divided into '''sectors''' of equal length, for example 512 bytes (4,096 bits). The encryption/decryption then happens on a per-sector basis, so the n'th sector of the blockdevice/file on disk will store the encrypted version of the n'th sector of the original data.<br />
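<br />
For example, a 1 GiB blockdevice divided into 512-byte sectors consists of 2,097,152 sectors, each of which is encrypted and decrypted independently.<br />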
<br />
Whenever the operating system or an application requests a certain fragment of data from the blockdevice/file, the whole sector (or sectors) that contains the data will be read from disk, decrypted on-the-fly, and temporarily stored in memory:<br />
<br />
╔═══════╗<br />
sector 1 ║"???.."║<br />
╠═══════╣ ╭┈┈┈┈┈╮<br />
sector 2 ║"???.."║ ┊ key ┊<br />
╠═══════╣ ╰┈┈┬┈┈╯<br />
⁝ ⁝ │<br />
╠═══════╣ ▼ ┣┉┉┉┉┉┉┉┫<br />
sector n ║"???.."║━━━━━━━(decryption)━━━━━━▶┋"abc.."┋ sector n<br />
╠═══════╣ ┣┉┉┉┉┉┉┉┫<br />
⁝ ⁝<br />
╚═══════╝<br />
<br />
encrypted unencrypted<br />
blockdevice or data in memory<br />
file on disk<br />
<br />
Similarly, on each write operation, all sectors that are affected must be re-encrypted completely (while the rest of the sectors remain untouched).<br />
<br />
===Keys, keyfiles and passphrases===<br />
<br />
In order to be able to de/encrypt data, the disk encryption system needs to know the unique secret "key" associated with it. This is a randomly generated byte string of a certain length, for example 32 bytes (256 bits).<br />
<br />
Whenever the encrypted block device or folder in question is to be mounted, its corresponding key (henceforth called its "master key") must be retrieved - usually from one of the following locations:<br />
<br />
<ul><br />
<li><p>'''''stored in a plaintext keyfile'''''</p><br />
<br />
Storing the master key in a file in readable form is the simplest option. The file - called a "keyfile" - can be placed on a USB stick that you keep in a secure location and only connect to the computer when you want to mount the encrypted parts of the disk (e.g. during boot or login).<br />
</li><br />
<br />
<li><p>'''''stored in passphrase-protected form in a keyfile or on the disk itself'''''</p><br />
<br />
The master key (and thus the encrypted data) can be protected with a secret passphrase, which you will have to remember and enter each time you want to mount the encrypted block device or folder.<br />
<br />
A common setup is to apply so-called "key stretching" to the passphrase (via a "key derivation function"), and use the resulting enhanced passphrase as the mount key for decrypting the actual master key (which has been previously stored in encrypted form):<br />
<br />
<pre><br />
╭┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╮ ╭┈┈┈┈┈┈┈┈┈┈┈╮<br />
┊ mount passphrase ┊━━━━━⎛key derivation⎞━━━▶┊ mount key ┊<br />
╰┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈╯ ,───⎝ function ⎠ ╰┈┈┈┈┈┬┈┈┈┈┈╯<br />
╭──────╮ ╱ │<br />
│ salt │───────────´ │<br />
╰──────╯ │<br />
╭─────────────────────╮ ▼ ╭┈┈┈┈┈┈┈┈┈┈┈┈╮<br />
│ encrypted master key│━━━━━━━━━━━━━━━━━━━━━━(decryption)━━━▶┊ master key ┊<br />
╰─────────────────────╯ ╰┈┈┈┈┈┈┈┈┈┈┈┈╯<br />
</pre><br />
<br />
The '''key derivation function''' (e.g. PBKDF2 or scrypt) is deliberately slow (it applies many iterations of a hash function, e.g. 1000 iterations of HMAC-SHA-512), so that brute-force attacks to find the passphrase are rendered infeasible. For the normal use-case of an authorized user, it will only need to be calculated once per session, so the small slowdown is not a problem.<br>It also takes an additional blob of data, the so-called "'''salt'''", as an argument - this is randomly generated once during set-up of the disk encryption and stored unprotected as part of the cryptographic metadata. Because it will be a different value for each setup, this makes it infeasible for attackers to speed up brute-force attacks using precomputed tables for the key derivation function.<br />
<br />
The '''encrypted master key''' can be stored on disk together with the encrypted data. This way, the confidentiality of the encrypted data depends completely on the secret passphrase.<br />
<br />
Additional security can be attained by instead storing the encrypted master key in a keyfile on e.g. a USB stick. This provides '''two-factor authentication''': Accessing the encrypted data now requires something only you ''know'' (the passphrase), and additionally something only you ''have'' (the keyfile).<br />
<br />
Another way of achieving two-factor authentication is to augment the above key retrieval scheme to mathematically "combine" the passphrase with byte data read from one or more external files (located on a USB stick or similar), before passing it to the key derivation function.<br>The files in question can be anything, e.g. normal JPEG images, which can be beneficial for [[#Plausible Deniability]]. They are still called "keyfiles" in this context, though.</li><br />
<br />
<li><p>'''''randomly generated on-the-fly for each session'''''</p><br />
<br />
In some cases, e.g. when encrypting swap space or a {{ic|/tmp}} partition, it is not necessary to keep a persistent master key at all. A new throwaway key can be randomly generated for each session, without requiring any user interaction. This means that once unmounted, all files written to the partition in question can never be decrypted again by ''anyone'' - which in those particular use-cases is perfectly fine. (See the sketch after this list.)<br />
</li><br />
</ul><br />
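<br />
As a minimal sketch of the throwaway-key approach (assuming a dm-crypt based setup with crypttab support; the mapping name, device and cipher options are illustrative placeholders), encrypted swap with a fresh random key per boot can be declared in {{ic|/etc/crypttab}}:<br />
<br />
 cryptswap  /dev/sdX2  /dev/urandom  swap,cipher=aes-cbc-essiv:sha256,size=256<br />
<br />
The resulting mapping {{ic|/dev/mapper/cryptswap}} can then be referenced as swap space in {{ic|/etc/fstab}}.<br />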
<br />
<br>After it has been derived, the master key is securely stored in memory (e.g. in a kernel keyring), for as long as the encrypted block device or folder is mounted.<br />
<br />
It is usually not used for de/encrypting the disk data directly, though.<br />
For example, in the case of stacked filesystem encryption, each file can be automatically assigned its own encryption key. Whenever the file is to be read/modified, this file key first needs to be decrypted using the master key, before it can itself be used to de/encrypt the file contents:<br />
<br />
╭┈┈┈┈┈┈┈┈┈┈┈┈╮<br />
┊ master key ┊<br />
''file on disk:'' ╰┈┈┈┈┈┬┈┈┈┈┈┈╯<br />
┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┐ │<br />
╎╭───────────────────╮╎ ▼ ╭┈┈┈┈┈┈┈┈┈┈╮<br />
╎│ encrypted file key│━━━━(decryption)━━━▶┊ file key ┊<br />
╎╰───────────────────╯╎ ╰┈┈┈┈┬┈┈┈┈┈╯<br />
╎┌───────────────────┐╎ ▼ ┌┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┐<br />
╎│ encrypted file │◀━━━━━━━━━━━━━━━━━(de/encryption)━━━▶┊ readable file ┊<br />
╎│ contents │╎ ┊ contents ┊<br />
╎└───────────────────┘╎ └┈┈┈┈┈┈┈┈┈┈┈┈┈┈┈┘<br />
└ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘<br />
<br />
In a similar manner, a separate key (e.g. one per folder) may be used for the encryption of file names in the case of stacked filesystem encryption.<br />
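<br />
As a hands-on illustration of the above, an ''EncFS'' mount can be created and opened with a single command (the directory names are just examples); EncFS then transparently manages the per-file keys and encrypted file names:<br />
<br />
 $ encfs ~/.Private ~/Private<br />
<br />
Files written to {{ic|~/Private}} show up in encrypted form in {{ic|~/.Private}}.<br />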
<br />
In the case of block device encryption, ...<br />
{{Expansion|}}<br />
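<br />
For ''dm-crypt+LUKS'' specifically, the cipher, master key size and key slots of an existing volume can be inspected with ({{ic|/dev/sdXn}} is a placeholder for the encrypted partition):<br />
 # cryptsetup luksDump /dev/sdXn<br />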
<br />
''Further reading:''<br />
* [[Wikipedia:Passphrase]]<br />
* [[Wikipedia:Key_(cryptography)]]<br />
* [[Wikipedia:Key_management]]<br />
* [[Wikipedia:Key_derivation_function]]<br />
<br />
===Ciphers and modes of operation===<br />
<br />
The algorithm used for translating between corresponding pieces of unencrypted and encrypted data (so-called "plaintext" and "ciphertext") under a given encryption key is called a "'''cipher'''".<br />
<br />
Disk encryption employs "block ciphers", which operate on fixed-length blocks of data, e.g. 16 bytes (128 bits). At the time of this writing, the predominantly used ones are:<br />
{| class="wikitable" style="margin:0 5em 1.5em 5em;"<br />
! scope="col" style="text-align:left" | <br />
! scope="col" style="text-align:left" | block&nbsp;size<br />
! scope="col" style="text-align:left" | key&nbsp;size<br />
! scope="col" style="text-align:left" | comment<br />
|-<br />
! scope="row" style="text-align:right" | [[Wikipedia:Advanced_Encryption_Standard|AES]]<br />
| 128 bits<br />
| 128, 192 or 256 bits<br />
| ''approved by the NSA for protecting "SECRET" and "TOP SECRET" classified US-government information (when used with a key size of 192 or 256 bits)''<br />
|-<br />
! scope="row" style="text-align:right" | [[Wikipedia:Blowfish_%28cipher%29|Blowfish]]<br />
| 64 bits<br />
| 32–448 bits<br />
| ''one of the first patent-free secure ciphers that became publicly available, hence very well established on Linux''<br />
|-<br />
! scope="row" style="text-align:right" | [[Wikipedia:Twofish|Twofish]]<br />
| 128 bits<br />
| 128, 192 or 256 bits<br />
| ''developed as the successor of Blowfish, but has not attained the same widespread usage''<br />
|}<br />
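<br />
The ciphers actually available for block device encryption depend on the running kernel and its loaded modules; they can be listed with:<br />
 $ cat /proc/crypto<br />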
<br />
Encrypting/decrypting a sector ([[#Basic principle|see above]]) is achieved by dividing it into small blocks matching the cipher's block-size, and following a certain rule-set (a so-called "'''mode of operation'''") for how to consecutively apply the cipher to the individual blocks.<br />
<br />
Simply applying it to each block separately without modification (dubbed the "''electronic codebook (ECB)''" mode) would not be secure, because if the same 16 bytes of plaintext always produce the same 16 bytes of ciphertext, an attacker could easily recognize patterns in the ciphertext that is stored on disk.<br />
<br />
The most basic (and common) mode of operation used in practice is "''cipher-block chaining (CBC)''". When encrypting a sector with this mode, each block of plaintext data is combined in a mathematical way with the ciphertext of the previous block, before encrypting it using the cipher. For the first block, since it has no previous ciphertext to use, a special pre-generated data block stored with the sector's cryptographic metadata and called an "'''initialization vector (IV)'''" is used:<br />
<br />
╭──────────────╮<br />
│initialization│<br />
│vector │<br />
╰────────┬─────╯<br />
╭ ╠══════════╣ ╭─key │ ┣┉┉┉┉┉┉┉┉┉┉┫ <br />
│ ║ ║ ▼ ▼ ┋ ┋ . START<br />
┴ ║"????????"║◀━━━━(cipher)━━━━(+)━━━━━┋"Hello, W"┋ block ╱╰────┐<br />
sector n ║ ║ ┋ ┋ 1 ╲╭────┘<br />
of file or ║ ║──────────────────╮ ┋ ┋ ' <br />
blockdevice ╟──────────╢ ╭─key │ ┠┈┈┈┈┈┈┈┈┈┈┨<br />
┬ ║ ║ ▼ ▼ ┋ ┋<br />
│ ║"????????"║◀━━━━(cipher)━━━━(+)━━━━━┋"orld !!!"┋ block<br />
│ ║ ║ ┋ ┋ 2<br />
│ ║ ║──────────────────╮ ┋ ┋<br />
│ ╟──────────╢ │ ┠┈┈┈┈┈┈┈┈┈┈┨<br />
│ ║ ║ ▼ ┋ ┋<br />
⁝ ⁝ ... ⁝ ... ... ⁝ ... ⁝ ...<br />
<br />
ciphertext plaintext<br />
on disk in memory<br />
<br />
When decrypting, the procedure is reversed analogously.<br />
<br />
One thing worth noting is the generation of the unique initialization vector for each sector. The simplest choice is to calculate it in a predictable fashion from a readily available value such as the sector number. However, this might allow an attacker with repeated access to the system to perform a so-called [http://en.wikipedia.org/wiki/Watermarking_attack watermarking attack]. To prevent that, a method called "Encrypted salt-sector initialization vector ('''ESSIV''')" can be used to generate the initialization vectors in a way that makes them look completely random to a potential attacker.<br />
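<br />
For example, with ''dm-crypt'' the cipher, mode of operation and IV generation scheme are chosen as a single specification string; a sketch of creating a LUKS volume using AES in CBC mode with ESSIV might look like this ({{ic|/dev/sdXn}} is a placeholder, and the parameters are illustrative rather than a recommendation):<br />
 # cryptsetup --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/sdXn<br />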
<br />
There are also a number of other, more complicated modes of operation available for disk encryption, which already provide built-in security against such attacks.<br />
Some can additionally guarantee the authenticity ([[#Data integrity/authenticity|see below]]) of the encrypted data.<br />
<br />
''Further reading:''<br />
* [[Wikipedia:Disk_encryption_theory]]<br />
* [[Wikipedia:Block_cipher]]<br />
* [[Wikipedia:Block_cipher_modes_of_operation]]<br />
<br />
===Cryptographic metadata===<br />
<br />
{{Expansion|}}<br />
<br />
===Data integrity/authenticity===<br />
<br />
{{Expansion|}}<br />
<br />
''Further reading:''<br />
* [[Wikipedia:Authenticated_encryption]]<br />
<br />
===Plausible deniability===<br />
<br />
{{Expansion|}}<br />
<br />
==Notes & References==<br />
<small><br />
# [[#summary|^]] see http://www.truecrypt.org/legal/license<br />
# [[#practical_implications|^]] well, a single file in those filesystems could be used as a container (virtual loop-back device!) but then one wouldn't actually be using the filesystem (and the features it provides) anymore<br />
# [[#compatibility_.26_prevalence|^]] [http://www.scherrer.cc/crypt CrossCrypt] - Open Source AES and TwoFish Linux compatible on the fly encryption for Windows XP and Windows 2000<br />
# [[#compatibility_.26_prevalence|^]] [http://www.freeotfe.org FreeOTFE] - supports Windows 2000 and later (for PC), and Windows Mobile 2003 and later (for PDA)<br />
# [[#compatibility_.26_prevalence|^]] see [http://www.arg0.net/encfs-mac-build EncFs build instructions for Mac]<br />
# [[#compatibility_.26_prevalence|^]] see http://www.freshports.org/sysutils/fusefs-encfs/<br />
# [[#compatibility_.26_prevalence|^]] see http://www.chromium.org/chromium-os/chromiumos-design-docs/protecting-cached-user-data<br />
</small></div>Timemasterhttps://wiki.archlinux.org/index.php?title=ZFS&diff=239698ZFS2012-12-10T01:47:03Z<p>Timemaster: /* Add zfs to DAEMONS list */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Article summary start}}<br />
{{Article summary text|This page provides basic guidelines for installing the native ZFS Linux kernel module.}}<br />
{{Article summary heading|Related}}<br />
{{Article summary wiki|Installing Arch Linux on ZFS}}<br />
{{Article summary wiki|ZFS on FUSE}}<br />
{{Article summary end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], and a maximum [[Wikipedia:Exabyte|16 Exabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"], ZFS is stable, fast, secure, and future-proof. Because it is licensed under the GPL-incompatible CDDL, ZFS cannot be distributed along with the Linux kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
==Installation==<br />
<br />
The ZFS kernel module is available in the [[AUR]] via {{aur|zfs}}.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators, so configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
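<br />
For example, {{ic|zfs}} manages datasets and their properties, while {{ic|zpool}} manages the pools themselves; a few illustrative commands (pool and dataset names are placeholders):<br />
<br />
 # zfs create bigdata/home<br />
 # zfs set compression=on bigdata/home<br />
 # zpool list<br />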
<br />
===mkinitcpio hook===<br />
<br />
If you are using ZFS on your root filesystem, you will need to add the {{ic|zfs}} hook to [[Mkinitcpio|mkinitcpio.conf]]; otherwise, the hook is not needed.<br />
<br />
You will need to change your [[kernel parameters]] to include the dataset you want to boot. You can use <code>zfs=bootfs</code> to use the ZFS bootfs (set via <code>zpool set bootfs=rpool/ROOT/arch rpool</code>) or you can set the [[kernel parameters]] to <code>zfs=<pool>/<dataset></code> to boot directly from a ZFS dataset.<br />
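<br />
For instance, to boot directly from a dataset, the appended kernel parameter might look like this (pool and dataset names are placeholders):<br />
 zfs=rpool/ROOT/arch<br />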
<br />
To see all available options for the ZFS hook:<br />
<br />
$ mkinitcpio -H zfs<br />
<br />
To use the mkinitcpio hook, you will need to add <code>zfs</code> to your <code>HOOKS</code> in <code>/etc/mkinitcpio.conf</code>:<br />
<br />
{{hc|/etc/mkinitcpio.conf|<br />
...<br />
HOOKS<nowiki>="base udev autodetect pata scsi sata encrypt zfs filesystems"</nowiki><br />
...<br />
}}<br />
<br />
It is important to place this after any hooks which are needed to prepare the drive before it is mounted. For example, if your ZFS volume is encrypted, then you will need to place encrypt before the zfs hook to unlock it first.<br />
<br />
Regenerate the initramfs image:<br />
<br />
# mkinitcpio -p linux<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live up to its "zero administration" name, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount your zpool in {{ic|/etc/fstab}}; the zfs daemon imports and mounts your zfs pool automatically.<br />
<br />
====Systemd====<br />
<br />
Enable the service so it is automatically started at boot time<br />
# systemctl enable zfs.service<br />
To manually start the daemon<br />
# systemctl start zfs.service<br />
<br />
====Initscripts====<br />
Add zfs to the DAEMONS list:<br />
<br />
{{hc|/etc/rc.conf|<br />
...<br />
DAEMONS<nowiki>=(... @syslog-ng zfs dbus ...)</nowiki><br />
...<br />
}}<br />
<br />
And now start the daemon if it is not started already<br />
<br />
# rc.d start zfs<br />
<br />
===Create a storage pool===<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is not necessary to partition your drives before creating the zfs filesystem; this will be done automatically. However, if you feel the need to completely wipe your drive before creating the filesystem, this can easily be done with the dd command.<br />
<br />
# dd if=/dev/zero of=/dev/<device><br />
<br />
Be careful: this command irreversibly destroys all data on the target device!<br />
<br />
Once you have the list of drives, it is time to get the IDs of the drives you will be using. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool ZFS on Linux developers recommend] using device IDs when creating ZFS storage pools of less than 10 devices. To find the IDs of your devices:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The IDs should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now finally, create the ZFS pool:<br />
<br />
# zpool create -m <mount> <pool> raidz <ids><br />
<br />
or as an example<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''pool''': This is the name of the pool. Change it to whatever you like.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then your pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''raidz''': The type of virtual device that will be created from the pool of devices. RAID-Z is a special implementation of RAID 5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about RAID-Z.<br />
<br />
* '''ids''': The names of the drives or partitions that you want to include in your pool. Get them from {{ic|/dev/disk/by-id}}.<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that your pool is mounted. Using {{ic|# zpool status}} will show that your pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot your computer to make sure your ZFS pool is mounted at boot. It is best to deal with any errors as soon as possible, before transferring your data.<br />
<br />
== Usage ==<br />
<br />
To see all the commands available in ZFS, use<br />
<br />
$ man zfs<br />
<br />
or<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub your pool<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in your root crontab<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of your ZFS storage pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about your ZFS pool, including any read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool.<br />
<br />
# zpool destroy <pool><br />
<br />
and now when checking the status<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of your pool, see [[#Check zfs pool status]].<br />
<br />
==Troubleshooting==<br />
<br />
=== does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs pool:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the <code>zpool create</code> command.<br />
<br />
=== No hostid found ===<br />
<br />
This error occurs at boot, with the following line appearing before the initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the SPL hostid. There are two solutions for this. You can place your SPL hostid in the [[kernel parameters]] in your boot loader, for example by adding <code>spl_hostid=0x00bab10c</code>.<br />
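<br />
The hostid of the running system can be printed with the <code>hostid</code> command (the value below is only an example):<br />
 $ hostid<br />
 00bab10c<br />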
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
<br />
==Tips and tricks==<br />
<br />
==See also==<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]</div>Timemaster