[[Category:File systems]]
[[ja:ZFS]]
{{Related articles start}}
{{Related|File systems}}
{{Related|Experimenting with ZFS}}
{{Related|Installing Arch Linux on ZFS}}
{{Related|ZFS on FUSE}}
{{Related articles end}}

[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005.

Features of ZFS include: pooled storage (integrated volume management – zpool), [[Wikipedia:Copy-on-write|copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 exabyte]] file size, and a maximum storage capacity of 256 quadrillion [[Wikipedia:Zettabyte|zettabytes]], with no limit on the number of filesystems (datasets) or files[http://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/index.html]. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).

Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"], ZFS is stable, fast, secure, and future-proof. Because ZFS is licensed under the CDDL, which is incompatible with the GPL, it cannot be distributed along with the Linux kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).

ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and supercomputers.

==Installation==
=== General ===
Install {{AUR|zfs-linux-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#archzfs|archzfs]] repository. This package depends on {{AUR|zfs-utils-linux-git}} and {{AUR|spl-linux-git}}, which in turn depends on {{AUR|spl-utils-linux-git}}. SPL (Solaris Porting Layer) is a Linux kernel module implementing Solaris APIs for ZFS compatibility.

For users that desire ZFS builds from stable releases, {{AUR|zfs-linux-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#archzfs|archzfs]] repository. A script to build ZFS and its dependencies automatically can be found [[#Automated build script|here]].

{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It is not possible to apply kernel updates until updated packages are uploaded to the AUR or the [[Unofficial user repositories#archzfs|archzfs]] repository.}}

Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.

{{Tip|You can [[downgrade]] your linux version to the one from the [[Unofficial user repositories#archzfs|archzfs]] repo if your current kernel is newer.}}

=== Root on ZFS ===

When performing an Arch install on ZFS, {{AUR|zfs-git}} and its dependencies can be installed in the archiso environment as outlined in the previous section.

It may be useful to prepare a [[#Embed the archzfs packages into an archiso|customized archiso]] with ZFS support built in. For a much more detailed guide on installing Arch with ZFS as its root file system, see [[Installing Arch Linux on ZFS]].

=== DKMS ===

{{Accuracy|This method was reported to not work correctly in January 2016, with pacman not triggering DKMS after a kernel upgrade or reinstall.}}

Users can make use of [[Dynamic Kernel Module Support]] (DKMS) to rebuild the ZFS modules automatically with every kernel upgrade.

Read the [[Mkinitcpio]] wiki entry for a general understanding of the initial ramdisk environment, and see [[Mkinitcpio#HOOKS]] for adding the hook.

Install {{AUR|zfs-dkms}} or {{AUR|zfs-dkms-git}} and apply the post-install instructions given by these packages.

{{Tip|Add an {{ic|IgnorePkg}} entry to [[pacman.conf]] to prevent these packages from upgrading when doing a regular update.}}
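For example, a minimal sketch of such an entry, assuming the DKMS packages named above are the ones installed (adjust the package names to match your system):

{{hc|/etc/pacman.conf|<nowiki>
[options]
...
IgnorePkg = zfs-dkms spl-dkms
</nowiki>}}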

==Experimenting with ZFS ==

Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs), which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss, are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.

==Configuration==

ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.

===Automatic Start===

For ZFS to live up to its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file {{ic|/etc/zfs/zpool.cache}}.

For each pool you want automatically mounted by the zfs daemon, execute:
 # zpool set cachefile=/etc/zfs/zpool.cache <pool>

Enable the service so it is automatically started at boot time:

 # systemctl enable zfs.target

To manually start the daemon:

 # systemctl start zfs.target

==Create a storage pool==

Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.

{{Note|If some or all devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information. ([[Mdadm#Prepare_the_Devices]])}}
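For example, a sketch with a hypothetical device name; see [[Mdadm#Prepare_the_Devices]] before running this, as it destroys the old array metadata:

 # mdadm --zero-superblock /dev/sdX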

{{Warning|For Advanced Format disks with a 4KB sector size, an ashift of 12 is recommended for best performance. Advanced Format disks emulate a sector size of 512 bytes for compatibility with legacy systems; this causes ZFS to sometimes use an ashift value that is not ideal. Once the pool has been created, the only way to change the ashift value is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What's going on with performance?], [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandleAdvacedFormatDrives 1.15 How does ZFS on Linux handle Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}

Having identified the list of drives, it is now time to get the ids of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool ZFS on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the ids:

 # ls -lh /dev/disk/by-id/

The ids should look similar to the following:

 lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc
 lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde
 lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd
 lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb

{{Style|Missing references to [[Persistent block device naming]], it is useless to explain the differences (or even what they are) here.}}

Disk labels and UUIDs can also be used for ZFS mounts by using [[GUID Partition Table|GPT]] partitions. ZFS drives have labels, but Linux is unable to read them at boot. Unlike [[Master Boot Record|MBR]] partitions, GPT partitions directly support both UUIDs and labels independent of the format inside the partition. Partitioning rather than using the whole disk for ZFS offers two additional advantages. The OS does not generate bogus partition numbers from whatever unpredictable data ZFS has written to the partition sector, and, if desired, you can easily over-provision SSD drives, and slightly over-provision spindle drives to ensure that different models with slightly different sector counts can {{ic|zpool replace}} into your mirrors. This is a lot of organization and control over ZFS using readily available tools and techniques at almost zero cost.

Use [[GUID Partition Table|gdisk]] to partition all or part of the drive as a single partition. gdisk does not automatically name partitions, so if partition labels are desired, use gdisk command "c" to label the partitions. Some reasons you might prefer labels over UUIDs are: labels are easy to control, labels can be titled to make the purpose of each disk in your arrangement readily apparent, and labels are shorter and easier to type. These are all advantages when the server is down and the heat is on. GPT partition labels have plenty of space and can store most international characters [[wikipedia:GUID_Partition_Table#Partition_entries]], allowing large data pools to be labeled in an organized fashion.
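As a non-interactive sketch of the same idea, ''sgdisk'' (shipped alongside gdisk) can create and label a single whole-disk partition in one command; the device name and label here are placeholders, and {{ic|bf01}} is a partition type code commonly used for ZFS:

 # sgdisk --new=1:0:0 --typecode=1:bf01 --change-name=1:zfsdata1 /dev/sdX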

Drives partitioned with GPT have labels and UUIDs that look like this:

 # ls -l /dev/disk/by-partlabel
 lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata1 -> ../../sdd1
 lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata2 -> ../../sdc1
 lrwxrwxrwx 1 root root 10 Apr 30 01:59 zfsl2arc -> ../../sda1

 # ls -l /dev/disk/by-partuuid
 lrwxrwxrwx 1 root root 10 Apr 30 01:44 148c462c-7819-431a-9aba-5bf42bb5a34e -> ../../sdd1
 lrwxrwxrwx 1 root root 10 Apr 30 01:59 4f95da30-b2fb-412b-9090-fc349993df56 -> ../../sda1
 lrwxrwxrwx 1 root root 10 Apr 30 01:44 e5ccef58-5adf-4094-81a7-3bac846a885f -> ../../sdc1

Now, finally, create the ZFS pool:

 # zpool create -f -m <mount> <pool> raidz <ids>

* '''create''': subcommand to create the pool.

* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].

* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.

* '''pool''': This is the name of the pool.

* '''raidz''': This is the type of virtual device that will be created from the pool of devices. RAID-Z is a special implementation of RAID 5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.

* '''ids''': The names of the drives or partitions to include in the pool. Get them from {{ic|/dev/disk/by-id}}.

Here is an example of the full command:

 # zpool create -f -m /mnt/data bigdata \
       raidz \
       ata-ST3000DM001-9YN166_S1F0KDGY \
       ata-ST3000DM001-9YN166_S1F0JKRR \
       ata-ST3000DM001-9YN166_S1F0KBP8 \
       ata-ST3000DM001-9YN166_S1F0JTM1

=== Advanced format disks ===

In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection of ZFS might detect 512 bytes because of the backward compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (see the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandleAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:

 # zpool create -f -o ashift=12 -m /mnt/data bigdata \
       raidz \
       ata-ST3000DM001-9YN166_S1F0KDGY \
       ata-ST3000DM001-9YN166_S1F0JKRR \
       ata-ST3000DM001-9YN166_S1F0KBP8 \
       ata-ST3000DM001-9YN166_S1F0JTM1

=== Verifying pool creation ===

If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.

{{hc|# zpool status|
  pool: bigdata
 state: ONLINE
  scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        bigdata                                    ONLINE       0     0     0
          raidz1-0                                 ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0KDGY-part1  ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0JKRR-part1  ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0KBP8-part1  ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0JTM1-part1  ONLINE       0     0     0

errors: No known data errors
}}

At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.

=== GRUB-compatible pool creation ===

By default, ''zpool'' will enable all features on a pool. If {{ic|/boot}} resides on ZFS and [[GRUB]] is used, you must only enable read-only compatible features or the non-read-only features supported by GRUB ({{ic|lz4_compress}} as of version 2.02.beta2). Otherwise GRUB will not be able to read the pool.

 # zpool create -f -d \
       -o feature@async_destroy=enabled \
       -o feature@empty_bpobj=enabled \
       -o feature@lz4_compress=enabled \
       -o feature@spacemap_histogram=enabled \
       -o feature@enabled_txg=enabled \
       <pool_name> <vdevs>

{{Tip|As of September 2015, GRUB's development tree supports {{ic|extensible_dataset}}, {{ic|hole_birth}}, {{ic|embedded_data}}, and {{ic|large_blocks}}, making it viable to use a pool with all features enabled, either at create time or by using {{ic|zpool upgrade <pool_name>}}, if {{AUR|grub-git}} is installed.}}

=== Importing a pool created by id ===

Eventually a pool may fail to auto-mount and you will need to import it to bring your pool back. Take care to avoid the most obvious solution.

 # zpool import zfsdata    # Do not do this! Always use -d

This will import your pools using {{ic|/dev/sd?}}, which will lead to problems the next time you rearrange your drives. This may be as simple as rebooting with a USB drive left in the machine, which harkens back to a time when PCs would not boot when a floppy disk was left in a machine. Adapt one of the following commands to import your pool so that pool imports retain the persistence they were created with:

 # zpool import -d /dev/disk/by-id zfsdata
 # zpool import -d /dev/disk/by-partlabel zfsdata
 # zpool import -d /dev/disk/by-partuuid zfsdata

== Tuning ==

=== General ===
Many parameters are available for zfs file systems; you can view a full list with {{ic|zfs get all <pool>}}. Two common ones to adjust are '''atime''' and '''compression'''.

Atime is enabled by default, but for most users it represents superfluous writes to the zpool, and it can be disabled using the zfs command:
 # zfs set atime=off <pool>

As an alternative to turning off atime completely, '''relatime''' is available. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':
 # zfs set relatime=on <pool>

Compression is just that, transparent compression of data. ZFS supports a few different algorithms; presently lz4 is the default. '''gzip''' is also available for seldom-written yet highly-compressible data; consult the man page for more details. Enable compression using the zfs command:
 # zfs set compression=on <pool>

Other options for zfs can be displayed again, using the zfs command:
 # zfs get all <pool>

=== Database ===
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help with fragmentation and file access, at the cost that ZFS would have to allocate a new 128KiB block each time only a few bytes are written.

Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (helpful for backup purposes), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:
 # zfs set recordsize=8K <pool>/postgres

These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:
 # zfs set primarycache=metadata <pool>/postgres

If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases often sync their data files to the file system on their own transaction commits anyway. The end result of this is that ZFS will be committing data '''twice''' to the data disks, and it can severely impact performance. You can tell ZFS to prefer not to use the ZIL, in which case data is only committed to the file system once. Setting this for non-database file systems, or for pools with configured log devices, can actually ''negatively'' impact the performance, so beware:
 # zfs set logbias=throughput <pool>/postgres

These can also be done at file system creation time, for example:
 # zfs create -o recordsize=8K \
              -o primarycache=metadata \
              -o mountpoint=/var/lib/postgres \
              -o logbias=throughput \
               <pool>/postgres

Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.

=== /tmp ===
If you would like to use ZFS to store your /tmp directory, which may be useful for storing arbitrarily-large sets of files or simply keeping your RAM free of idle data, you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (e.g. with {{ic|fsync}} or {{ic|O_SYNC}}) and return immediately. While this has severe application-side data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected. Please note this does ''not'' affect the integrity of ZFS itself, only the possibility that data an application expects to be on disk may not have actually been written out following a crash.
 # zfs set sync=disabled <pool>/tmp

Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents some kinds of privilege-escalation attacks and the use of device nodes:
 # zfs set setuid=off <pool>/tmp
 # zfs set devices=off <pool>/tmp

Combining all of these for a create command would be as follows:
 # zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp

Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:
 # systemctl mask tmp.mount

=== ZVOLs ===

ZFS volumes (ZVOLs) can suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default block size ({{ic|volblocksize}}) for ZVOLs is 8KiB already. If possible, it is best to align any partitions contained in a ZVOL to your block size (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''volblocksize''' to accommodate the data inside the ZVOL as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).
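As a sketch, a ZVOL with a block size matched to an 8KiB workload could be created like this (the name and size are placeholders; volblocksize can only be set at creation time):

 # zfs create -V 10G -o volblocksize=8K <pool>/examplevol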

==== RAIDZ and Advanced Format physical disks ====

Each block of a ZVOL gets its own parity disks, and if you have physical media with logical block sizes of 4096B, 8192B, or so on, the parity needs to be stored in whole physical blocks, and this can drastically increase the space requirements of a ZVOL, requiring 2× or more physical storage capacity than the ZVOL's logical capacity. Setting the '''volblocksize''' to 16k or 32k can help reduce this footprint drastically.

See [https://github.com/zfsonlinux/zfs/issues/1807 ZFS on Linux issue #1807] for details.

== Usage ==

Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas, for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:

 # zfs create <nameofzpool>/<nameofdataset>

It is then possible to apply ZFS-specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:

 # zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory>

To see all the commands available in ZFS, use:

 $ man zfs

or:

 $ man zpool

=== Scrub ===

ZFS pools should be scrubbed at least once a week. To scrub the pool:

 # zpool scrub <pool>

To do automatic scrubbing once a week, set the following line in the root crontab:

{{hc|# crontab -e|
...
30 19 * * 5 zpool scrub <pool>
...
}}

Replace {{ic|<pool>}} with the name of the ZFS pool.
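Alternatively, a systemd timer/service pair can replace the cron job; this is a minimal sketch (the unit names are hypothetical), enabled with {{ic|systemctl enable zfs-scrub@<pool>.timer}}:

{{hc|/etc/systemd/system/zfs-scrub@.timer|<nowiki>
[Unit]
Description=Weekly zfs scrub on %i

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
</nowiki>}}

{{hc|/etc/systemd/system/zfs-scrub@.service|<nowiki>
[Unit]
Description=zfs scrub on %i

[Service]
Type=oneshot
ExecStart=/usr/bin/zpool scrub %i
</nowiki>}}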

=== Check zfs pool status ===

To print a nice table with statistics about the ZFS pool, including read/write errors, use:

 # zpool status -v

=== Destroy a storage pool ===

ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:

 # zpool destroy <pool>

And now when checking the status:

{{hc|# zpool status|no pools available}}

To find the name of the pool, see [[#Check zfs pool status]].

=== Export a storage pool ===

If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid is different in the archiso than it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.

Any attempt made to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can be produced at boot time, abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair by either exporting the pool, or adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].

To export a pool:

 # zpool export bigdata

=== Rename a Zpool ===
Renaming a zpool that is already created is accomplished in two steps:

 # zpool export oldname
 # zpool import oldname newname

=== Setting a Different Mount Point ===
The mount point for a given zpool can be moved at will with one command:
 # zfs set mountpoint=/foo/bar poolname

=== Swap volume ===

ZFS does not allow the use of swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the {{ic|getconf PAGESIZE}} command (the default on x86_64 is 4KiB). Another option useful for keeping the system running well in low-memory situations is not caching the ZVOL data.

Create an 8GiB zfs volume:

 # zfs create -V 8G -b $(getconf PAGESIZE) \
              -o primarycache=metadata \
              -o com.sun:auto-snapshot=false <pool>/swap

Prepare it as a swap partition:

 # mkswap -f /dev/zvol/<pool>/swap
 # swapon /dev/zvol/<pool>/swap

To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.

Add a line to {{ic|/etc/fstab}}:

 /dev/zvol/<pool>/swap none swap discard 0 0

{{Out of date|The hibernate hook is deprecated. Does the limitation still apply?}}
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernate, keep a partition for it.

Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:
 # zfs umount -a

=== Automatic snapshots ===

==== ZFS Automatic Snapshot Service for Linux ====

The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).

To prevent a dataset from being snapshotted at all, set {{ic|1=com.sun:auto-snapshot=false}} on it. Likewise, more fine-grained control is available per label; if, for example, no monthlies are to be kept on a dataset, set {{ic|1=com.sun:auto-snapshot:monthly=false}} on it.
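These are ordinary user properties, set with {{ic|zfs set}}; a short sketch with hypothetical dataset names:

 # zfs set com.sun:auto-snapshot=false <pool>/scratch
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/media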

==== ZFS Snapshot Manager ====

The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [[wikipedia:Backup_rotation_scheme#Grandfather-father-son|"Grandfather-father-son"]] scheme. It can be configured to, e.g., keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots.

The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it can be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots stored locally, while a much longer retention is available on a remote storage server.

==Troubleshooting==
=== ZPool creation fails ===
If the following error occurs, then it can be fixed:

 the kernel failed to rescan the partition table: 16
 cannot label 'sdc': try using parted(8) and then provide a specific slice: -1

One reason this can occur is because [https://github.com/zfsonlinux/zfs/issues/2582 ZFS expects pool creation to take less than 1 second]. This is a reasonable assumption under ordinary conditions, but in many situations it may take longer. Each drive will need to be cleared again before another attempt can be made:

 # parted /dev/sda rm 1
 # dd if=/dev/zero of=/dev/sda bs=512 count=1
 # zpool labelclear /dev/sda

A brute force creation can be attempted over and over again, and with some luck the ZPool creation will take less than 1 second. One cause for creation slowdown can be slow burst reads and writes on a drive. By reading from the disk in parallel with ZPool creation, it may be possible to increase burst speeds:

 # dd if=/dev/sda of=/dev/null

This can be done with multiple drives by saving the above command for each drive to a file on separate lines and running:

 # cat $FILE | parallel

Then run the ZPool creation at the same time.

=== ZFS is using too much RAM ===

By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. To adjust the ARC size, add the following to the [[Kernel parameters]] list:

 zfs.zfs_arc_max=536870912 # (for 512MB)
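The limit can also be adjusted on a running system through the module parameter in sysfs; a quick sketch, again for 512MB (the value is in bytes):

 # echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max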

For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].

=== Does not contain an EFI label ===

The following error will occur when attempting to create a zfs filesystem:

 /dev/disk/by-id/<id> does not contain an EFI label but it may contain partition

The way to overcome this is to use {{ic|-f}} with the zpool create command.

=== No hostid found ===

An error that occurs at boot with the following lines appearing before initscript output:

 ZFS: No hostid found on kernel command line or /etc/hostid.

This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding {{ic|1=spl.spl_hostid=0x00bab10c}}.

The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}}, and then regenerate the initramfs image, which will copy the hostid into the initramfs image:

 # mkinitcpio -p linux

=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===

==== Unexported pool ====

If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[#Emergency chroot repair with archzfs]].

Once inside the chroot environment, load the ZFS module and force-import the zpool:

 # zpool import -a -f

now export the pool:

 # zpool export <pool>

To see the available pools, use:

 # zpool status

It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].

If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:

 # mkinitcpio -p linux

==== Incorrect hostid ====

Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.

Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.

Boot using zfs_force and write down the hostid. This one is just an example:
 % hostid
 0a0af0f8

This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0x0a0af0f8}}. Another solution is writing the hostid inside the initram image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.

Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but it is not advisable as a permanent solution.

=== Devices have different sector alignment ===

Once a drive has become faulted, it should be replaced A.S.A.P. with an identical drive.

 # zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f

but in this instance, the following error is produced:

 cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment

ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}), and this causes the resulting error.

For Advanced Format disks with a 4KB block size, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What's going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].

Use zdb to find the ashift of the zpool: {{ic|zdb {{!}} grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:

 # zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f

Check the zpool status for confirmation:

{{hc|# zpool status -v|
  pool: bigdata
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Jun 16 11:16:28 2014
    10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go
    2.57G resilvered, 0.17% done
config:

        NAME                                   STATE     READ WRITE CKSUM
        bigdata                                DEGRADED     0     0     0
          raidz1-0                             DEGRADED     0     0     0
            replacing-0                        OFFLINE      0     0     0
              ata-ST3000DM001-9YN166_S1F0KDGY  OFFLINE      0     0     0
              ata-ST3000DM001-1CH166_W1F478BD  ONLINE       0     0     0  (resilvering)
            ata-ST3000DM001-9YN166_S1F0JKRR    ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0KBP8    ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0JTM1    ONLINE       0     0     0

errors: No known data errors}}

== Tips and tricks ==

===Embed the archzfs packages into an archiso===

Follow the [[Archiso]] steps for creating a fully functional Arch Linux live CD/DVD/USB image.

Enable the [[Unofficial user repositories#archzfs|archzfs]] repository:

{{hc|~/archlive/pacman.conf|<nowiki>
...
[archzfs]
Server = http://archzfs.com/$repo/x86_64
</nowiki>}}

Add the {{ic|archzfs-git}} package to the list of packages to be installed:

{{hc|~/archlive/packages.both|
...
archzfs-git
}}

Complete [[Archiso#Build_the_ISO|Build the ISO]] to finally build the iso.

=== Encryption in ZFS on Linux ===

ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.

dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their names are fixed, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.

For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:

 # cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 \
              --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc
 # zpool create zroot /dev/mapper/enc
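Since LUKS is mentioned above as an option, here is a minimal sketch of the LUKS-based equivalent (device and mapping names are placeholders):

 # cryptsetup luksFormat /dev/sdX
 # cryptsetup open /dev/sdX enc
 # zpool create zroot /dev/mapper/enc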

In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:

 HOOKS="... keyboard encrypt zfs ..."

Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.

Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.

ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even from the same input you get different output (thanks to salting), making deduplication impossible. To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.

For example, to have an encrypted home (the two passwords, encryption and login, must be the same):
 # zfs create -o compression=off \
              -o dedup=off \
              -o mountpoint=/home/<username> \
              <zpool>/<username>
 # useradd -m <username>
 # passwd <username>
 # ecryptfs-migrate-home -u <username>
 <log in user and complete the procedure with ecryptfs-unwrap-passphrase>

=== Emergency chroot repair with archzfs ===

To get into the ZFS filesystem from a live system for maintenance, there are two options:

# Build a custom archiso with ZFS as described in [[#Embed the archzfs packages into an archiso]].
# Boot the latest official archiso and bring up the network. Then enable the [[Unofficial_user_repositories#archzfs|archzfs]] repository inside the live system as usual, sync the pacman package database and install the ''archzfs-archiso-linux'' package.

To start the recovery, load the ZFS kernel modules:

 # modprobe zfs

Import the pool:

 # zpool import -a -R /mnt

Mount the boot partitions (if any):

 # mount /dev/sda2 /mnt/boot
 # mount /dev/sda1 /mnt/boot/efi

Chroot into the ZFS filesystem:

 # arch-chroot /mnt /bin/bash

Check the kernel version:

 # pacman -Qi linux
 # uname -r

uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:

 # depmod -a 3.6.9-1-ARCH

(Use the version gathered from {{ic|pacman -Qi linux}}, matching the kernel modules directory name under the chroot's {{ic|/lib/modules}}.) This will load the correct kernel modules for the kernel version installed in the chroot installation.

Regenerate the ramdisk:

 # mkinitcpio -p linux

There should be no errors.

=== Bindmount ===
Here a bind mount from /mnt/zfspool to /srv/nfs4/music is created. The configuration ensures that the zfs pool is ready before the bind mount is created.

==== fstab ====
See [http://www.freedesktop.org/software/systemd/man/systemd.mount.html systemd.mount] for more information on how systemd converts fstab into mount unit files with [http://www.freedesktop.org/software/systemd/man/systemd-fstab-generator.html systemd-fstab-generator].

{{hc|/etc/fstab|<nowiki>
/mnt/zfspool /srv/nfs4/music none bind,defaults,nofail,x-systemd.requires=zfs-mount.service 0 0
</nowiki>}}

==== systemd mount unit ====

If it is not possible to bind-mount a directory residing on zfs onto another directory using fstab, because the fstab is read before the zfs pool is ready, a systemd mount unit can be used for the bind mount instead. The name of the mount unit must correspond to the directory mentioned after "Where", with slashes replaced by minuses. See [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdAndBindMounts SystemdAndBindMounts] and [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdBindMountUnits SystemdBindMountUnits] for more details.
{{hc|srv-nfs4-music.mount|<nowiki>
[Mount]
What=/mnt/zfspool
Where=/srv/nfs4/music
Type=none
Options=bind

[Unit]
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=zfs-mount.service
Requires=zfs-mount.service
ConditionPathIsDirectory=/mnt/zfspool

[Install]
WantedBy=local-fs.target
</nowiki>}}

== See also ==

* [[Installing Arch Linux on ZFS]]
* [http://zfsonlinux.org/ ZFS on Linux]
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]

; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]
<hr />
<div>[[Category:File systems]]<br />
[[ja:ZFS]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. <br />
<br />
Features of ZFS include: pooled storage (integrated volume management – zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum 256 Quadrillion [[Wikipedia:Zettabyte|Zettabytes]] storage with no limit on number of filesystems (datasets) or files[http://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/index.html]. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the CDDL, and thus incompatible with GPL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
=== General ===<br />
Install {{AUR|zfs-linux-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#archzfs|archzfs]] repository. This package has {{AUR|zfs-utils-linux-git}} and {{AUR|spl-linux-git}} as a dependency, which in turn has {{AUR|spl-utils-linux-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-linux-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#archzfs|archzfs]] repository. A script to build ZFS and its dependencies automatically can be found [[#Automated build script|here]].<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#archzfs|archzfs]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
{{Tip| You can [[downgrade]] your linux version to the one from [[Unofficial user repositories#archzfs|archzfs]] repo if your current kenel is newer.}}<br />
<br />
=== Root on ZFS ===<br />
<br />
When performing an Arch install on ZFS, {{AUR|zfs-git}} and its dependencies can be installed in the archiso environment as outlined in the previous section.<br />
<br />
It may be useful to prepare a [[#Embed the archzfs packages into an archiso|customized archiso]] with ZFS support builtin. For a much more detailed guide on installing Arch with ZFS as its root file system, see [[Installing Arch Linux on ZFS]].<br />
<br />
=== DKMS ===<br />
<br />
{{Accuracy|This method was reported to not work correctly in January 2016, with pacman not triggering DKMS after a kernel upgrade or reinstalling.}}<br />
<br />
Users can make use of DKMS [[Dynamic Kernel Module Support]] to rebuild the ZFS modules automatically with every kernel upgrade. <br />
<br />
Read the [[Mkinitcpio]] wiki entry for a general understanding of the initial ramdisk environment, and adding the dkms hook [[Mkinitcpio#HOOKS]].<br />
<br />
Install {{AUR|zfs-dkms}} or {{AUR|zfs-dkms-git}} and apply the post-install instructions given by these packages.<br />
<br />
{{Tip|Add an {{ic|IgnorePkg}} entry to [[pacman.conf]] to prevent these packages from upgrading when doing a regular update.}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straight forward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is not necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all device have been used in a software RAID set it is paramount to erase any old RAID configuration information. ([[Mdadm#Prepare_the_Devices]]) }}<br />
<br />
{{Warning|For Advanced Format Disks with 4KB sector size, an ashift of 12 is recommended for best performance. Advanced Format disks emulate a sector size of 512 bytes for compatibility with legacy systems, this causes ZFS to sometimes use an ashift option number that is not ideal. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?], [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandleAdvacedFormatDrives 1.15 How does ZFS on Linux handle Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the id's of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the id's, simply:<br />
<br />
# ls -lh /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
{{Style|Missing references to [[Persistent block device naming]], it is useless to explain the differences (or even what they are) here.}}<br />
<br />
Disk labels and UUID can also be used for ZFS mounts by using [[GUID Partition Table|GPT]] partitions. ZFS drives have labels but Linux is unable to read them at boot. Unlike [[Master Boot Record|MBR]] partitions, GPT partitions directly support both UUID and labels independent of the format inside the partition. Partitioning rather than using the whole disk for ZFS offers two additional advantages. The OS does not generate bogus partition numbers from whatever unpredictable data ZFS has written to the partition sector, and if desired, you can easily over provision SSD drives, and slightly over provision spindle drives to ensure that different models with slightly different sector counts can zpool replace into your mirrors. This is a lot of organization and control over ZFS using readily available tools and techniques at almost zero cost.<br />
<br />
Use [[GUID Partition Table|gdisk]] to partition the all or part of the drive as a single partition. gdisk does not automatically name partitions so if partition labels are desired use gdisk command "c" to label the partitions. Some reasons you might prefer labels over UUID are: labels are easy to control, labels can be titled to make the purpose of each disk in your arrangement readily apparent, and labels are shorter and easier to type. These are all advantages when the server is down and the heat is on. GPT partition labels have plenty of space and can store most international characters [[wikipedia:GUID_Partition_Table#Partition_entries]] allowing large data pools to be labeled in an organized fashion.<br />
<br />
Drives partitioned with GPT have labels and UUID that look like this. <br />
<br />
# ls -l /dev/disk/by-partlabel<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata1 -> ../../sdd1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata2 -> ../../sdc1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:59 zfsl2arc -> ../../sda1<br />
<br />
# ls -l /dev/disk/by-partuuid<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 148c462c-7819-431a-9aba-5bf42bb5a34e -> ../../sdd1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:59 4f95da30-b2fb-412b-9090-fc349993df56 -> ../../sda1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 e5ccef58-5adf-4094-81a7-3bac846a885f -> ../../sdc1<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions that to include into the pool. Get it from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Advanced format disks ===<br />
<br />
In case Advanced Format disks are used which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (See the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandleAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Verifying pool creation ===<br />
<br />
If the command is successful, there will be no output. Running {{ic|$ mount}} will show that the pool is mounted; running {{ic|# zpool status}} will show that the pool has been created.<br />
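<br />
For example, to check the mount of the pool created earlier (names taken from the example above):<br />
<br />
 $ mount | grep bigdata<br />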
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
=== GRUB-compatible pool creation ===<br />
<br />
By default, ''zpool'' will enable all features on a pool. If {{ic|/boot}} resides on ZFS and [[GRUB]] is used, you must enable only those features that are read-only compatible or supported by GRUB ({{ic|lz4_compress}} as of version 2.02.beta2); otherwise GRUB will not be able to read the pool.<br />
<br />
# zpool create -f -d \<br />
-o feature@async_destroy=enabled \<br />
-o feature@empty_bpobj=enabled \<br />
-o feature@lz4_compress=enabled \<br />
-o feature@spacemap_histogram=enabled \<br />
-o feature@enabled_txg=enabled \<br />
<pool_name> <vdevs><br />
<br />
{{Tip|As of September 2015, GRUB's development tree supports {{ic|extensible_dataset}}, {{ic|hole_birth}}, {{ic|embedded_data}}, and {{ic|large_blocks}}, making it viable to use a pool with all features enabled, either at create time or by using {{ic|zpool upgrade <pool_name>}}, if {{AUR|grub-git}} is installed.}}<br />
<br />
=== Importing a pool created by id ===<br />
<br />
Eventually a pool may fail to auto mount and you need to import to bring your pool back. Take care to avoid the most obvious solution.<br />
<br />
 # zpool import zfsdata    # Do not do this! Always use -d<br />
<br />
This will import your pools using {{ic|/dev/sd?}} names, which will lead to problems the next time you rearrange your drives. That may be as simple as rebooting with a USB drive left in the machine, harking back to the days when PCs would not boot with a floppy disk left in the drive. Adapt one of the following commands to import your pool so that pool imports retain the persistence they were created with.<br />
<br />
# zpool import -d /dev/disk/by-id zfsdata<br />
# zpool import -d /dev/disk/by-partlabel zfsdata<br />
# zpool import -d /dev/disk/by-partuuid zfsdata<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems; you can view a full list with {{ic|zfs get all <pool>}}. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that: transparent compression of data. ZFS supports a few different algorithms; presently lz4 is the default. '''gzip''' is also available for seldom-written yet highly-compressible data; consult the man page for more details. Enable compression using the zfs command:<br />
# zfs set compression=on <pool><br />
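<br />
To select a specific algorithm instead of the default, name it explicitly; gzip additionally accepts a compression level, e.g. {{ic|gzip-9}}:<br />
<br />
 # zfs set compression=lz4 <pool><br />
 # zfs set compression=gzip-9 <pool>/<dataset><br />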
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help reduce fragmentation and speed up file access, at the cost that ZFS has to allocate new 128KiB blocks each time only a few bytes of an existing block are changed.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this is that ZFS will be committing data '''twice''' to the data disks, and it can severely impact performance. You can tell ZFS to prefer not to use the ZIL, in which case data is only committed to the file system once. Setting this for non-database file systems, or for pools with configured log devices, can actually ''negatively'' impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you would like to use ZFS to store your /tmp directory, which may be useful for storing arbitrarily-large sets of files or simply keeping your RAM free of idle data, you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (e.g., with {{ic|fsync}} or {{ic|O_SYNC}}) and return immediately. While this has severe application-side data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected. Please note this does ''not'' affect the integrity of ZFS itself, only the possibility that data an application expects on-disk may not have actually been written out following a crash.<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents some kinds of privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== ZVOLs ===<br />
<br />
ZFS volumes (ZVOLs) can suffer from the same block size-related issues as RDBMSes, but it is worth noting that the equivalent property for ZVOLs, '''volblocksize''', defaults to 8KiB already. If possible, it is best to align any partitions contained in a ZVOL to this size (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''volblocksize''' to accommodate the data inside the ZVOL as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
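<br />
Unlike recordsize, the block size of a volume can only be set at creation time. A sketch with placeholder names, creating a 10 GiB volume with a 16KiB block size:<br />
<br />
 # zfs create -V 10G -o volblocksize=16K <pool>/<volume><br />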
<br />
==== RAIDZ and Advanced Format physical disks ====<br />
<br />
Each block of a ZVOL gets its own parity, and if you have physical media with logical block sizes of 4096B, 8192B, or so on, the parity needs to be stored in whole physical blocks, which can drastically increase the space requirements of a ZVOL, requiring 2× or more physical storage capacity than the ZVOL's logical capacity. Setting the '''volblocksize''' to 16k or 32k can help reduce this footprint drastically.<br />
<br />
See [https://github.com/zfsonlinux/zfs/issues/1807 ZFS on Linux issue #1807] for details.<br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS-specific attributes to the dataset. For example, one could assign a quota limit to a dataset nested within another dataset (quotas apply to datasets, so {{ic|<directory>}} here must itself have been created as a dataset):<br />
<br />
 # zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
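<br />
The resulting value can be verified with {{ic|zfs get}}:<br />
<br />
 # zfs get quota <nameofzpool>/<nameofdataset>/<directory><br />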
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
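<br />
As an alternative to cron, a [[systemd]] timer can trigger the weekly scrub. The following is only a sketch: the template units {{ic|zfs-scrub@.service}} and {{ic|zfs-scrub@.timer}} are hypothetical names, not shipped by any package.<br />
<br />
{{hc|/etc/systemd/system/zfs-scrub@.service|<nowiki><br />
[Unit]<br />
Description=Scrub ZFS pool %i<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/usr/bin/zpool scrub %i<br />
</nowiki>}}<br />
<br />
{{hc|/etc/systemd/system/zfs-scrub@.timer|<nowiki><br />
[Unit]<br />
Description=Weekly scrub of ZFS pool %i<br />
<br />
[Timer]<br />
OnCalendar=weekly<br />
Persistent=true<br />
<br />
[Install]<br />
WantedBy=timers.target<br />
</nowiki>}}<br />
<br />
Enable the timer for a given pool, e.g. {{ic|systemctl enable zfs-scrub@bigdata.timer}}.<br />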
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid is different in the archiso than it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. At boot time this error abruptly abandons the system in the busybox console, requiring an archiso for an emergency repair: either export the pool, or add {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not support swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the {{ic|getconf PAGESIZE}} command (the default on x86_64 is 4KiB). Another option useful for keeping the system running well in low-memory situations is not caching the ZVOL data.<br />
<br />
Create an 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
{{Out of date|The hibernate hook is deprecated. Does the limitation still apply?}}<br />
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ====<br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc.), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the {{ic|--keep}} parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set {{ic|1=com.sun:auto-snapshot=false}} on it. Likewise, more fine-grained control is available per label: if, for example, no monthlies are to be kept for a dataset, set {{ic|1=com.sun:auto-snapshot:monthly=false}} on it.<br />
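<br />
For example, with placeholder names:<br />
<br />
 # zfs set com.sun:auto-snapshot=false <pool>/<dataset><br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/<dataset><br />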
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a Python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [[wikipedia:Backup_rotation_scheme#Grandfather-father-son|"grandfather-father-son"]] scheme. It can be configured to, e.g., keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots.<br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
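<br />
For reference, the replication such tools drive boils down to piping {{ic|zfs send}} into {{ic|zfs receive}}, for instance over SSH. A sketch with placeholder snapshot, host, and pool names:<br />
<br />
 # zfs send <pool>/<dataset>@<snapshot> | ssh <remotehost> zfs receive <remotepool>/<dataset><br />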
<br />
==Troubleshooting==<br />
=== ZPool creation fails ===<br />
If the following error occurs, it can be fixed.<br />
<br />
 the kernel failed to rescan the partition table: 16<br />
 cannot label 'sdc': try using parted(8) and then provide a specific slice: -1<br />
<br />
One reason this can occur is that [https://github.com/zfsonlinux/zfs/issues/2582 ZFS expects pool creation to take less than 1 second]. This is a reasonable assumption under ordinary conditions, but in many situations it may take longer. Each drive will need to be cleared again before another attempt can be made.<br />
<br />
 # parted /dev/sda rm 1<br />
 # dd if=/dev/zero of=/dev/sda bs=512 count=1<br />
 # zpool labelclear /dev/sda<br />
<br />
A brute-force creation can be attempted over and over again, and with some luck the ZPool creation will take less than 1 second.<br />
One cause of creation slowdown can be slow burst reads and writes on a drive. By reading from the disk in parallel with ZPool creation, it may be possible to increase burst speeds.<br />
<br />
# dd if=/dev/sda of=/dev/null<br />
<br />
This can be done for multiple drives by saving the above command for each drive to a file, one per line, and running<br />
<br />
# cat $FILE | parallel<br />
<br />
Then run ZPool creation at the same time.<br />
<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
 zfs.zfs_arc_max=536870912 # (for 512MiB)<br />
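<br />
The same module parameter can also be set persistently through modprobe configuration instead of the kernel command line; a sketch, again using the 512MiB value:<br />
<br />
{{hc|/etc/modprobe.d/zfs.conf|<nowiki><br />
options zfs zfs_arc_max=536870912<br />
</nowiki>}}<br />
<br />
If the zfs module is loaded from the initramfs, regenerate the image afterwards so the option takes effect at boot.<br />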
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error may occur when attempting to create a zpool:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use {{ic|-f}} with the ''zpool create'' command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the SPL hostid. There are two solutions for this. Either place the SPL hostid in the [[kernel parameters]] in the boot loader, for example by adding {{ic|1=spl.spl_hostid=0x00bab10c}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}} and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use,<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso, the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then regenerate the ramdisk in the normally booted system:<br />
<br />
# mkinitcpio -p linux<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking ownership. So during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. This one is just an example:<br />
<br />
{{hc|$ hostid|0a0af0f8}}<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0x0a0af0f8}}. Another solution is writing the hostid inside the initramfs image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive.<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb &#124; grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
Follow the [[Archiso]] steps for creating a fully functional Arch Linux live CD/DVD/USB image.<br />
<br />
Enable the [[Unofficial user repositories#archzfs|archzfs]] repository:<br />
<br />
{{hc|~/archlive/pacman.conf|<nowiki><br />
...<br />
[archzfs]<br />
Server = http://archzfs.com/$repo/x86_64<br />
</nowiki>}}<br />
<br />
Add the {{ic|archzfs-linux}} group to the list of packages to be installed:<br />
<br />
{{hc|~/archlive/packages.both|<br />
...<br />
archzfs-git<br />
}}<br />
<br />
Complete [[Archiso#Build_the_ISO|Build the ISO]] to finally build the iso.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while retaining all the advantages of ZFS, like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can span multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 \<br />
--key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed no import errors will occur.<br />
<br />
Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even identical input produces different output (thanks to salting), making deduplication impossible. To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
To get into the ZFS filesystem from the live system for maintenance, there are two options:<br />
<br />
# Build custom archiso with ZFS as described in [[#Embed the archzfs packages into an archiso]].<br />
# Boot the latest official archiso and bring up the network. Then enable the [[Unofficial_user_repositories#demz-repo-archiso|demz-repo-archiso]] repository inside the live system as usual, sync the pacman package database, and install the ''archzfs-git'' package.<br />
<br />
To start the recovery, load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
 # depmod -a 3.6.9-1-ARCH<br />
<br />
Use the version gathered from {{ic|pacman -Qi linux}}, but with the name of the matching kernel modules directory under the chroot's {{ic|/lib/modules}}.<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
=== Bindmount ===<br />
In this example, a bind mount from /mnt/zfspool to /srv/nfs4/music is created. The configuration ensures that the zfs pool is ready before the bind mount is attempted.<br />
<br />
==== fstab ====<br />
See [http://www.freedesktop.org/software/systemd/man/systemd.mount.html systemd.mount] for more information on how systemd converts fstab into mount unit files with [http://www.freedesktop.org/software/systemd/man/systemd-fstab-generator.html systemd-fstab-generator].<br />
<br />
{{hc|/etc/fstab|<nowiki><br />
/mnt/zfspool /srv/nfs4/music none bind,defaults,nofail,x-systemd.requires=zfs-mount.service 0 0<br />
</nowiki>}}<br />
<br />
==== systemd mount unit ====<br />
<br />
If it is not possible to bind-mount a directory residing on zfs onto another directory using fstab, because fstab is read before the zfs pool is ready, a systemd mount unit can be used for the bind mount instead. The name of the mount unit must correspond to the directory given after "Where", with slashes replaced by dashes. See [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdAndBindMounts] and [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdBindMountUnits] for more details.<br />
{{hc|srv-nfs4-music.mount|<nowiki><br />
[Mount]<br />
What=/mnt/zfspool<br />
Where=/srv/nfs4/music<br />
Type=none<br />
Options=bind<br />
<br />
[Unit]<br />
DefaultDependencies=no<br />
Conflicts=umount.target<br />
Before=local-fs.target umount.target<br />
After=zfs-mount.service<br />
Requires=zfs-mount.service<br />
ConditionPathIsDirectory=/mnt/zfspool<br />
<br />
[Install]<br />
WantedBy=local-fs.target<br />
</nowiki>}}<br />
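<br />
Enable the unit so the bind mount is set up at boot; the unit name follows from the "Where" path, as described above:<br />
<br />
 # systemctl enable srv-nfs4-music.mount<br />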
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS&diff=432786ZFS2016-04-25T06:24:02Z<p>Demizer: Changed demz-repo-core to archzfs</p>
<hr />
<div>[[Category:File systems]]<br />
[[ja:ZFS]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. <br />
<br />
Features of ZFS include: pooled storage (integrated volume management – zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum 256 Quadrillion [[Wikipedia:Zettabyte|Zettabytes]] storage with no limit on number of filesystems (datasets) or files[http://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/index.html]. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the CDDL, and thus incompatible with GPL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
=== General ===<br />
Install {{AUR|zfs-linux-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#archzfs|archzfs]] repository. This package has {{AUR|zfs-utils-linux-git}} and {{AUR|spl-linux-git}} as a dependency, which in turn has {{AUR|spl-utils-linux-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-linux-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#archzfs|archzfs]] repository. A script to build ZFS and its dependencies automatically can be found [[#Automated build script|here]].<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#archzfs|archzfs]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
{{Tip| You can [[downgrade]] your linux version to the one from [[Unofficial user repositories#archzfs|archzfs]] repo if your current kenel is newer.}}<br />
<br />
=== Root on ZFS ===<br />
<br />
When performing an Arch install on ZFS, {{AUR|zfs-git}} and its dependencies can be installed in the archiso environment as outlined in the previous section.<br />
<br />
It may be useful to prepare a [[#Embed the archzfs packages into an archiso|customized archiso]] with ZFS support builtin. For a much more detailed guide on installing Arch with ZFS as its root file system, see [[Installing Arch Linux on ZFS]].<br />
<br />
=== DKMS ===<br />
<br />
{{Accuracy|This method was reported to not work correctly in January 2016, with pacman not triggering DKMS after a kernel upgrade or reinstalling.}}<br />
<br />
Users can make use of DKMS [[Dynamic Kernel Module Support]] to rebuild the ZFS modules automatically with every kernel upgrade. <br />
<br />
Read the [[Mkinitcpio]] wiki entry for a general understanding of the initial ramdisk environment, and adding the dkms hook [[Mkinitcpio#HOOKS]].<br />
<br />
Install {{AUR|zfs-dkms}} or {{AUR|zfs-dkms-git}} and apply the post-install instructions given by these packages.<br />
<br />
{{Tip|Add an {{ic|IgnorePkg}} entry to [[pacman.conf]] to prevent these packages from upgrading when doing a regular update.}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straight forward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is not necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all device have been used in a software RAID set it is paramount to erase any old RAID configuration information. ([[Mdadm#Prepare_the_Devices]]) }}<br />
<br />
{{Warning|For Advanced Format Disks with 4KB sector size, an ashift of 12 is recommended for best performance. Advanced Format disks emulate a sector size of 512 bytes for compatibility with legacy systems, this causes ZFS to sometimes use an ashift option number that is not ideal. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?], [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandleAdvacedFormatDrives 1.15 How does ZFS on Linux handle Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the id's of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the id's, simply:<br />
<br />
# ls -lh /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
{{Style|Missing references to [[Persistent block device naming]], it is useless to explain the differences (or even what they are) here.}}<br />
<br />
Disk labels and UUID can also be used for ZFS mounts by using [[GUID Partition Table|GPT]] partitions. ZFS drives have labels but Linux is unable to read them at boot. Unlike [[Master Boot Record|MBR]] partitions, GPT partitions directly support both UUID and labels independent of the format inside the partition. Partitioning rather than using the whole disk for ZFS offers two additional advantages. The OS does not generate bogus partition numbers from whatever unpredictable data ZFS has written to the partition sector, and if desired, you can easily over provision SSD drives, and slightly over provision spindle drives to ensure that different models with slightly different sector counts can zpool replace into your mirrors. This is a lot of organization and control over ZFS using readily available tools and techniques at almost zero cost.<br />
<br />
Use [[GUID Partition Table|gdisk]] to partition the all or part of the drive as a single partition. gdisk does not automatically name partitions so if partition labels are desired use gdisk command "c" to label the partitions. Some reasons you might prefer labels over UUID are: labels are easy to control, labels can be titled to make the purpose of each disk in your arrangement readily apparent, and labels are shorter and easier to type. These are all advantages when the server is down and the heat is on. GPT partition labels have plenty of space and can store most international characters [[wikipedia:GUID_Partition_Table#Partition_entries]] allowing large data pools to be labeled in an organized fashion.<br />
<br />
Drives partitioned with GPT have labels and UUID that look like this. <br />
<br />
# ls -l /dev/disk/by-partlabel<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata1 -> ../../sdd1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata2 -> ../../sdc1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:59 zfsl2arc -> ../../sda1<br />
<br />
# ls -l /dev/disk/by-partuuid<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 148c462c-7819-431a-9aba-5bf42bb5a34e -> ../../sdd1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:59 4f95da30-b2fb-412b-9090-fc349993df56 -> ../../sda1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 e5ccef58-5adf-4094-81a7-3bac846a885f -> ../../sdc1<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions that to include into the pool. Get it from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Advanced format disks ===<br />
<br />
In case Advanced Format disks are used which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (See the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandleAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Verifying pool creation ===<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
=== GRUB-compatible pool creation ===<br />
<br />
By default, ''zpool'' will enable all features on a pool. If {{ic|/boot}} resides on ZFS and when using [[GRUB]], you must only enable read-only, or non-read-only features supported by GRUB ({{ic|lz4_compress}} as of version 2.02.beta2). Otherwise GRUB will not be able to read the pool.<br />
<br />
# zpool create -f -d \<br />
-o feature@async_destroy=enabled \<br />
-o feature@empty_bpobj=enabled \<br />
-o feature@lz4_compress=enabled \<br />
-o feature@spacemap_histogram=enabled \<br />
-o feature@enabled_txg=enabled \<br />
<pool_name> <vdevs><br />
<br />
{{Tip|As of September 2015, GRUB's development tree supports {{ic|extensible_dataset}}, {{ic|hole_birth}}, {{ic|embedded_data}}, and {{ic|large_blocks}}, making it viable to use a pool with all features enabled, either at create time or by using {{ic|zpool upgrade <pool_name>}}, if {{AUR|grub-git}} is installed.}}<br />
<br />
=== Importing a pool created by id ===<br />
<br />
Eventually a pool may fail to auto mount and you need to import to bring your pool back. Take care to avoid the most obvious solution.<br />
<br />
# ###zpool import zfsdata # Do not do this! Always use -d<br />
<br />
This will import your pools using {{ic|/dev/sd?}} which will lead to problems the next time you rearrange your drives. This may be as simple as rebooting with a USB drive left in the machine, which harkens back to a time when PC's would not boot when a floppy disk was left in a machine. Adapt one of the following commands to import your pool so that pool imports retain the persistence they were created with.<br />
<br />
# zpool import -d /dev/disk/by-id zfsdata<br />
# zpool import -d /dev/disk/by-partlabel zfsdata<br />
# zpool import -d /dev/disk/by-partuuid zfsdata<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems, you can view a full list with {{ic|zfs get all <pool>}}. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. ZFS supports a few different algorithms, presently lz4 is the default. '''gzip''' is also available for seldom-written yet highly-compressable data; consult the man page for more details. Enable compression using the zfs command:<br />
# zfs set compression=on <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of file being written. This can often help fragmentation and file access, at the cost that ZFS would have to allocate new 128KiB blocks each time only a few bytes are written to.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this is that ZFS will be committing data '''twice''' to the data disks, and it can severely impact performance. You can tell ZFS to prefer to not use the ZIL, and in which case, data is only committed to the file system once. Setting this for non-database file systems, or for pools with configured log devices, can actually ''negatively'' impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you would like to use ZFS to store your /tmp directory, which may be useful for storing arbitrarily-large sets of files or simply keeping your RAM free of idle data, you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (eg, with {{ic|fsync}} or {{ic|O_SYNC}}) and return immediately. While this has severe application-side data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected. Please note this does ''not'' affect the integrity of ZFS itself, only the possibility that data an application expects on-disk may not have actually been written out following a crash.<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents some kinds of privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== ZVOLs ===<br />
<br />
ZFS volumes (ZVOLs) can suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default recordsize for ZVOLs is 8KiB already. If possible, it is best to align any partitions contained in a ZVOL to your recordsize (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''recordsize''' to accommodate the data inside the ZVOL as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
<br />
==== RAIDZ and Advanced Format physical disks ====<br />
<br />
Each block of a ZVOL gets its own parity disks, and if you have physical media with logical block sizes of 4096B, 8192B, or so on, the parity needs to be stored in whole physical blocks, and this can drastically increase the space requirements of a ZVOL, requiring 2× or more physical storage capacity than the ZVOL's logical capacity. Setting the '''recordsize''' to 16k or 32k can help reduce this footprint drastically.<br />
<br />
See [https://github.com/zfsonlinux/zfs/issues/1807 ZFS on Linux issue #1807 for details]<br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use :<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including and read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso as the hostid is different in the archiso as it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempts made to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can be produced at boot time abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair by either exporting the pool, or adding the {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]]<br />
<br />
To export a pool:<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow the use of swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the {{ic|getconf PAGESIZE}} command (the default on x86_64 is 4KiB). Another option useful for keeping the system running well in low-memory situations is not caching the ZVOL data.<br />
<br />
Create an 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as a swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
{{Out of date|The hibernate hook is deprecated. Does the limitation still apply?}}<br />
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow use of the hibernate function. If you need hibernation, keep a dedicated partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ====<br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc.), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the {{ic|--keep}} parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set {{ic|1=com.sun:auto-snapshot=false}} on it. Likewise, more fine-grained control is possible per label; for example, if no monthly snapshots are to be kept for a dataset, set {{ic|1=com.sun:auto-snapshot:monthly=false}}.<br />
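<br />
For example, to disable monthly snapshots on a dataset (the pool and dataset names are placeholders):<br />
<br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/<dataset><br />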
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [[wikipedia:Backup_rotation_scheme#Grandfather-father-son|"Grandfather-father-son"]] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZPool creation fails ===<br />
If the following error occurs when creating a zpool, it can be fixed:<br />
<br />
 the kernel failed to rescan the partition table: 16<br />
 cannot label 'sdc': try using parted(8) and then provide a specific slice: -1<br />
<br />
One reason this can occur is because [https://github.com/zfsonlinux/zfs/issues/2582 ZFS expects pool creation to take less than 1 second]. This is a reasonable assumption under ordinary conditions, but in many situations it may take longer. Each drive will need to be cleared again before another attempt can be made.<br />
<br />
 # parted /dev/sda rm 1<br />
 # parted /dev/sda rm 2<br />
 # dd if=/dev/zero of=/dev/sda bs=512 count=1<br />
 # zpool labelclear /dev/sda<br />
<br />
A brute force creation can be attempted over and over again, and with some luck the ZPool creation will take less than 1 second. One cause of creation slowdown can be slow burst reads/writes on a drive. By reading from the disk in parallel with ZPool creation, it may be possible to increase burst speeds.<br />
<br />
# dd if=/dev/sda of=/dev/null<br />
<br />
This can be done with multiple drives by saving the above command for each drive to a file, one per line, and running:<br />
<br />
# cat $FILE | parallel<br />
<br />
Then run ZPool creation at the same time.<br />
<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
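<br />
The ARC limit can also be adjusted at runtime through the module's sysfs parameters (assuming the zfs module is loaded):<br />
<br />
 # echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max<br />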
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zpool on a whole disk:<br />
<br />
 /dev/disk/by-id/<id> does not contain an EFI label but it may contain partition information<br />
<br />
The way to overcome this is to use {{ic|-f}} with the zpool create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the SPL hostid. There are two solutions for this. Either place the SPL hostid in the [[kernel parameters]] in the boot loader, for example by adding {{ic|1=spl.spl_hostid=0x00bab10c}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}}, and then regenerate the initramfs image, which will copy the hostid into it.<br />
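<br />
If {{ic|/etc/hostid}} does not exist yet, it can be created first. For example, assuming a zfs-utils version that ships the {{ic|zgenhostid}} helper (older releases may not include it), the current hostid can be persisted with:<br />
<br />
 # zgenhostid $(hostid)<br />
<br />
Then regenerate the initramfs image:<br />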
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use:<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup; during the installation in the archiso, the network configuration could be different, generating a different hostid than the one in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:<br />
<br />
# mkinitcpio -p linux<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid that marks ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid; the value below is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0x0a0af0f8}}. Another solution is writing the hostid inside the initramfs image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb {{!}} grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
Follow the [[Archiso]] steps for creating a fully functional Arch Linux live CD/DVD/USB image.<br />
<br />
Enable the [[Unofficial user repositories#archzfs|archzfs]] repository:<br />
<br />
{{hc|~/archlive/pacman.conf|<nowiki><br />
...<br />
[archzfs]<br />
Server = http://archzfs.com/$repo/x86_64<br />
</nowiki>}}<br />
<br />
Add the {{ic|archzfs-git}} group to the list of packages to be installed:<br />
<br />
{{hc|~/archlive/packages.both|<br />
...<br />
archzfs-git<br />
}}<br />
<br />
Complete [[Archiso#Build_the_ISO|Build the ISO]] to finally build the iso.<br />
<br />
=== Encryption in ZFS on Linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while retaining all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their names are fixed, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 \<br />
--key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
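<br />
For pools other than the root filesystem, the mapping can be opened automatically at boot through {{ic|/etc/crypttab}}. This is a sketch assuming a LUKS-formatted {{ic|/dev/sdX}} unlocked by a keyfile (names and paths are illustrative; the plain-mode example above would need the corresponding plain-mode options instead):<br />
<br />
{{hc|/etc/crypttab|<nowiki><br />
enc  /dev/sdX  /root/enc.key  luks<br />
</nowiki>}}<br />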
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
<br />
Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even the same input produces different output (thanks to salting), making deduplication impossible. To reduce this unnecessary overhead, it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
To get into the ZFS filesystem from live system for maintenance, there are two options:<br />
<br />
# Build custom archiso with ZFS as described in [[#Embed the archzfs packages into an archiso]].<br />
# Boot the latest official archiso and bring up the network. Then enable the [[Unofficial user repositories#archzfs|archzfs]] repository inside the live system as usual, sync the pacman package database and install the ''archzfs-git'' package.<br />
<br />
To start the recovery, load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation, i.e. the version gathered from {{ic|pacman -Qi linux}}, using the matching kernel modules directory name under the chroot's {{ic|/lib/modules}}:<br />
<br />
 # depmod -a 3.6.9-1-ARCH<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
=== Bindmount ===<br />
Here, a bind mount from {{ic|/mnt/zfspool}} to {{ic|/srv/nfs4/music}} is created. The configuration ensures that the zfs pool is ready before the bind mount is created.<br />
<br />
==== fstab ====<br />
See [http://www.freedesktop.org/software/systemd/man/systemd.mount.html systemd.mount] for more information on how systemd converts fstab into mount unit files with [http://www.freedesktop.org/software/systemd/man/systemd-fstab-generator.html systemd-fstab-generator].<br />
<br />
{{hc|/etc/fstab|<nowiki><br />
/mnt/zfspool /srv/nfs4/music none bind,defaults,nofail,x-systemd.requires=zfs-mount.service 0 0<br />
</nowiki>}}<br />
<br />
==== systemd mount unit ====<br />
<br />
If it is not possible to bind mount a directory residing on zfs onto another directory using fstab because fstab is read before the zfs pool is ready, a systemd mount unit can be used for the bind mount instead. The name of the mount unit must match the path given after "Where", with slashes replaced by dashes. See [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdAndBindMounts SystemdAndBindMounts] and [http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdBindMountUnits SystemdBindMountUnits] for more details.<br />
{{hc|srv-nfs4-music.mount|<nowiki><br />
[Mount]<br />
What=/mnt/zfspool<br />
Where=/srv/nfs4/music<br />
Type=none<br />
Options=bind<br />
<br />
[Unit]<br />
DefaultDependencies=no<br />
Conflicts=umount.target<br />
Before=local-fs.target umount.target<br />
After=zfs-mount.service<br />
Requires=zfs-mount.service<br />
ConditionPathIsDirectory=/mnt/zfspool<br />
<br />
[Install]<br />
WantedBy=local-fs.target<br />
</nowiki>}}<br />
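<br />
Enable the unit so that the bind mount is created at boot:<br />
<br />
 # systemctl enable srv-nfs4-music.mount<br />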
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Demizerhttps://wiki.archlinux.org/index.php?title=Unofficial_user_repositories&diff=432785Unofficial user repositories2016-04-25T06:22:40Z<p>Demizer: Adding archzfs repo</p>
<hr />
<div>[[Category:Package management]]<br />
[[ja:非公式ユーザーリポジトリ]]<br />
[[zh-CN:Unofficial user repositories]]<br />
{{Expansion|Please fill in the missing information about repository maintainers.}}<br />
<br />
{{Related articles start}}<br />
{{Related|pacman-key}}<br />
{{Related|Official repositories}}<br />
{{Related articles end}}<br />
<br />
This article lists binary repositories freely created and shared by the community, often providing pre-built versions of packages whose PKGBUILDs are found in the [[AUR]].<br />
<br />
{{Warning|Neither the official Arch Linux Developers nor the Trusted Users perform tests of any sort to verify the contents of these repositories; it is up to each user to decide whether to trust their maintainers, and take full responsibility for whatever their decision brings.}}<br />
<br />
In order to use these repositories, you will have to add them to {{ic|/etc/pacman.conf}}, as explained in [[pacman#Repositories and mirrors]]. If a repository is signed, you will have to obtain and locally sign the associated key, as explained in [[Pacman-key#Adding unofficial keys]].<br />
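<br />
For example, for a signed repository, the listed key can be fetched and locally signed as follows (a generic sketch; replace {{ic|<key-id>}} with the Key-ID given for the repository):<br />
<br />
 # pacman-key --recv-keys <key-id><br />
 # pacman-key --lsign-key <key-id><br />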
<br />
If you want to create your own custom repository, follow [[pacman tips#Custom local repository]].<br />
<br />
{{Tip|To get a list of all servers listed in this page: {{bc|<nowiki>curl 'https://wiki.archlinux.org/index.php/Unofficial_user_repositories' | grep 'Server = ' | sed "s/\$arch/$(uname -m)/g" | cut -f 3 -d' '</nowiki>}}<br />
<br />
For your convenience you can, for example, open them all in a web browser to inspect the contents of their repositories.<br />
}}<br />
<br />
== Adding your repository to this page ==<br />
<br />
If you have your own repository, please add it to this page, so that all the other users will know where to find your packages. Please follow these rules when adding new repositories:<br />
<br />
* Keep the lists in alphabetical order.<br />
* Include some information about the maintainer: include at least a (nick)name and some form of contact information (web site, email address, user page on ArchWiki or the forums, etc.).<br />
* If the repository is of the ''signed'' variety, please include a key-id, possibly using it as the anchor for a link to its keyserver; if the key is not on a keyserver, include a link to the key file.<br />
* Include some short description (e.g. the category of packages provided in the repository).<br />
* If there is a page (either on ArchWiki or external) containing more information about the repository, include a link to it.<br />
* If possible, avoid using comments in code blocks. The formatted description is much more readable. Users who want some comments in their {{ic|pacman.conf}} can easily create it on their own.<br />
<br />
== Any ==<br />
<br />
"Any" repositories are architecture-independent. In other words, they can be used on both i686 and x86_64 systems.<br />
<br />
=== Signed ===<br />
<br />
==== infinality-bundle-fonts ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle-fonts repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle-fonts]<br />
Server = http://bohoomil.com/repo/fonts<br />
</nowiki>}}<br />
<br />
==== ivasilev ====<br />
<br />
* '''Maintainer:''' [http://ivasilev.net Ianis G. Vasilev]<br />
* '''Description:''' A variety of packages, mostly my own software and AUR builds.<br />
* '''Upstream page:''' http://ivasilev.net/pacman<br />
* '''Key-ID:''' 436BB513<br />
<br />
{{Note|I maintain 'any', 'i686' and 'x86_64' repos. Each of them includes packages from 'any'; $arch can be replaced with any of the three.}}<br />
<br />
{{bc|<nowiki><br />
[ivasilev]<br />
Server = http://ivasilev.net/pacman/any<br />
# Server = http://ivasilev.net/pacman/$arch<br />
</nowiki>}}<br />
<br />
==== pkgbuilder ====<br />
<br />
* '''Maintainer:''' [https://chriswarrick.com/ Chris Warrick]<br />
* '''Description:''' A repository for PKGBUILDer, a Python AUR helper.<br />
* '''Upstream page:''' https://github.com/Kwpolska/pkgbuilder<br />
* '''Key-ID:''' 5EAAEA16<br />
<br />
{{bc|<nowiki><br />
[pkgbuilder]<br />
Server = https://pkgbuilder-repo.chriswarrick.com/<br />
</nowiki>}}<br />
<br />
==== xyne-any ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for "any" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{Note|Use this repository only if there is no matching {{ic|[xyne-*]}} repository for your architecture.}}<br />
<br />
{{bc|<nowiki><br />
[xyne-any]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
==== archlinuxgr-any ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' The Hellenic (Greek) unofficial Arch Linux repository with many interesting packages.<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr-any]<br />
Server = http://archlinuxgr.tiven.org/archlinux/any<br />
</nowiki>}}<br />
<br />
== Both i686 and x86_64 ==<br />
<br />
Repositories with both i686 and x86_64 versions. The {{ic|$arch}} variable will be set automatically by pacman.<br />
<br />
=== Signed ===<br />
<br />
==== arcanisrepo ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#arcanis arcanis]<br />
* '''Description:''' A repository with some AUR packages including packages from VCS<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[arcanisrepo]<br />
Server = ftp://repo.arcanis.me/repo/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxcn ====<br />
<br />
* '''Maintainers:''' [https://plus.google.com/+PhoenixNemo/ Phoenix Nemo (phoenixlzx)], Felix Yan (felixonmars, TU), [https://twitter.com/lilydjwg lilydjwg], and others<br />
* '''Description:''' Packages by the Chinese Arch Linux community (mostly signed)<br />
* '''Git Repo:''' https://github.com/archlinuxcn/repo<br />
* '''Mirrors:''' https://github.com/archlinuxcn/mirrorlist-repo (Mostly for users in mainland China)<br />
* '''Key-ID:''' Once the repo is added, ''archlinuxcn-keyring'' package must be installed before any other so you don't get errors about PGP signatures.<br />
<br />
{{bc|<nowiki><br />
[archlinuxcn]<br />
SigLevel = Optional TrustedOnly<br />
Server = http://repo.archlinuxcn.org/$arch<br />
## or use a CDN (beta)<br />
#Server = https://cdn.repo.archlinuxcn.org/$arch<br />
</nowiki>}}<br />
<br />
==== catalyst ====<br />
<br />
* '''Maintainer:''' [[User:Vi0L0|Vi0l0]]<br />
* '''Description:''' ATI Catalyst proprietary drivers.<br />
* '''Upstream Page:''' http://catalyst.wirephire.com<br />
* '''Key-ID:''' 653C3094<br />
<br />
{{bc|<nowiki><br />
[catalyst]<br />
Server = http://catalyst.wirephire.com/repo/catalyst/$arch<br />
## Mirrors, if the primary server does not work or is too slow:<br />
#Server = http://mirror.hactar.bz/Vi0L0/catalyst/$arch<br />
</nowiki>}}<br />
<br />
==== catalyst-hd234k ====<br />
<br />
* '''Maintainer:''' [[User:Vi0L0|Vi0l0]]<br />
* '''Description:''' ATI Catalyst proprietary drivers.<br />
* '''Upstream Page:''' http://catalyst.wirephire.com<br />
* '''Key-ID:''' 653C3094<br />
<br />
{{bc|<nowiki><br />
[catalyst-hd234k]<br />
Server = http://catalyst.wirephire.com/repo/catalyst-hd234k/$arch<br />
## Mirrors, if the primary server does not work or is too slow:<br />
#Server = http://mirror.hactar.bz/Vi0L0/catalyst-hd234k/$arch<br />
</nowiki>}}<br />
<br />
==== city ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#bgyorgy Balló György]<br />
* '''Description:''' Experimental/unpopular packages.<br />
* '''Upstream page:''' http://pkgbuild.com/~bgyorgy/city.html<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[city]<br />
Server = http://pkgbuild.com/~bgyorgy/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== haskell-core ====<br />
<br />
See [[ArchHaskell#haskell-core]].<br />
<br />
==== haskell-happstack ====<br />
<br />
See [[ArchHaskell#haskell-happstack]].<br />
<br />
==== haskell-web ====<br />
<br />
See [[ArchHaskell#haskell-web]].<br />
<br />
==== infinality-bundle ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle main repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle]<br />
Server = http://bohoomil.com/repo/$arch<br />
</nowiki>}}<br />
<br />
==== ivasilev ====<br />
<br />
* '''Maintainer:''' [http://ivasilev.net Ianis G. Vasilev]<br />
* '''Description:''' A variety of packages, mostly my own software and AUR builds.<br />
* '''Upstream page:''' http://ivasilev.net/pacman<br />
* '''Key-ID:''' 436BB513<br />
<br />
{{Note|I maintain 'any', 'i686' and 'x86_64' repos. Each of them includes packages from 'any'; $arch can be replaced with any of the three.}}<br />
<br />
{{bc|<nowiki><br />
[ivasilev]<br />
Server = http://ivasilev.net/pacman/$arch<br />
</nowiki>}}<br />
<br />
==== llvm-svn ====<br />
<br />
* '''Maintainer:''' [[User:Kerberizer|Luchesar V. ILIEV (kerberizer)]]<br />
* '''Description:''' [https://aur.archlinux.org/pkgbase/llvm-svn llvm-svn] and [https://aur.archlinux.org/pkgbase/lib32-llvm-svn lib32-llvm-svn] from AUR: the LLVM compiler infrastructure, the Clang frontend, and the tools associated with it<br />
* '''Key-ID:''' [https://sks-keyservers.net/pks/lookup?op=vindex&search=0x76563F75679E4525&fingerprint=on&exact=on 0x76563F75679E4525], fingerprint <tt>D16C F22D 27D1 091A 841C 4BE9 7656 3F75 679E 4525</tt><br />
<br />
{{bc|<nowiki><br />
[llvm-svn]<br />
Server = http://repos.uni-plovdiv.net/archlinux/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== metalgamer ====<br />
<br />
* '''Maintainer:''' [http://metalgamer.eu/ metalgamer]<br />
* '''Description:''' Packages I use and/or maintain on the AUR.<br />
* '''Key ID:''' F55313FB<br />
<br />
{{bc|<nowiki><br />
[metalgamer]<br />
Server = http://repo.metalgamer.eu/$arch<br />
</nowiki>}}<br />
<br />
==== miffe ====<br />
<br />
* '''Maintainer:''' [https://bbs.archlinux.org/profile.php?id=4059 miffe]<br />
* '''Description:''' AUR packages maintained by miffe, e.g. linux-mainline<br />
* '''Key ID:''' 313F5ABD<br />
<br />
{{bc|<nowiki><br />
[miffe]<br />
Server = http://arch.miffe.org/$arch/<br />
</nowiki>}}<br />
<br />
==== pipelight ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Pipelight and wine-compholio<br />
* '''Upstream page:''' [http://fds-team.de/ fds-team.de]<br />
* '''Key-ID:''' E49CC0415DC2D5CA<br />
* '''Keyfile:''' http://repos.fds-team.de/Release.key<br />
<br />
{{bc|<nowiki><br />
[pipelight]<br />
Server = http://repos.fds-team.de/stable/arch/$arch<br />
</nowiki>}}<br />
<br />
==== repo-ck ====<br />
<br />
* '''Maintainer:''' [[User:Graysky|graysky]]<br />
* '''Description:''' Kernel and modules with Brain Fuck Scheduler and all the goodies in the ck1 patch set.<br />
* '''Upstream page:''' [http://repo-ck.com repo-ck.com]<br />
* '''Wiki:''' [[repo-ck]]<br />
* '''Key-ID:''' 5EE46C4C<br />
<br />
{{bc|<nowiki><br />
[repo-ck]<br />
Server = http://repo-ck.com/$arch<br />
</nowiki>}}<br />
<br />
==== seblu ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/developers/#seblu Sébastien Luttringer]<br />
* '''Description:''' All seblu useful pre-built packages, some homemade (virtualbox-ext-oracle, linux-seblu-meta, bedup).<br />
* '''Key-ID:''' Not required, as maintainer is a Developer<br />
<br />
{{bc|<nowiki><br />
[seblu]<br />
Server = http://seblu.net/a/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== seiichiro ====<br />
<br />
* '''Maintainer:''' [https://www.seiichiro0185.org Stefan Brand (seiichiro0185)]<br />
* '''Description:''' AUR-packages I use frequently<br />
* '''Key-ID:''' 805517CC<br />
<br />
{{bc|<nowiki><br />
[seiichiro]<br />
Server = http://www.seiichiro0185.org/repo/$arch<br />
</nowiki>}}<br />
<br />
==== sergej-repo ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#spupykin Sergej Pupykin]<br />
* '''Description:''' psi-plus, owncloud-git, ziproxy, android, MySQL, and other stuff. Some packages also available for armv7h.<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[sergej-repo]<br />
Server = http://repo.p5n.pp.ru/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== tredaelli-systemd ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#tredaelli Timothy Redaelli]<br />
* '''Description:''' systemd rebuilt with unofficial OpenVZ patch (kernel < 2.6.32-042stab111.1)<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{Note|{{ic|[tredaelli-systemd]}} must be put before {{ic|[core]}} in {{ic|/etc/pacman.conf}}}}<br />
<br />
{{bc|<nowiki><br />
[tredaelli-systemd]<br />
Server = http://pkgbuild.com/~tredaelli/repo/systemd/$arch<br />
</nowiki>}}<br />
<br />
==== herecura ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/people/trusted-users/#idevolder Ike Devolder]<br />
* '''Description:''' additional packages not found in the ''community'' repository<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[herecura]<br />
Server = http://repo.herecura.be/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== blackeagle-pre-community ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/people/trusted-users/#idevolder Ike Devolder]<br />
* '''Description:''' testing of the packages I maintain before they are moved to the ''community'' repository<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[blackeagle-pre-community]<br />
Server = http://repo.herecura.be/$repo/$arch<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
<br />
==== arch-deepin ====<br />
<br />
* '''Maintainer:''' [https://build.opensuse.org/project/show/home:metakcahura metak], [https://github.com/fasheng fasheng]<br />
* '''Description:''' Porting software from Linux Deepin to Archlinux.<br />
* '''Upstream page:''' https://github.com/fasheng/arch-deepin<br />
<br />
{{bc|<nowiki><br />
[home_metakcahura_arch-deepin_Arch_Extra]<br />
SigLevel = Never<br />
Server = http://download.opensuse.org/repositories/home:/metakcahura:/arch-deepin/Arch_Extra/$arch<br />
#Server = http://anorien.csc.warwick.ac.uk/mirrors/download.opensuse.org/repositories/home:/metakcahura:/arch-deepin/Arch_Extra/$arch<br />
</nowiki>}}<br />
<br />
==== archaudio ====<br />
<br />
* '''Maintainer:''' [[User:Schivmeister|Ray Rashif]], [https://aur.archlinux.org/account/jhernberg Joakim Hernberg]<br />
* '''Description:''' Pro-audio packages<br />
<br />
{{bc|<nowiki><br />
[archaudio-production]<br />
Server = http://repos.archaudio.org/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxfr ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:'''<br />
* '''Upstream page:''' http://afur.archlinux.fr<br />
<br />
{{bc|<nowiki><br />
[archlinuxfr]<br />
Server = http://repo.archlinux.fr/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgis ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Maintainers needed - low bandwidth<br />
<br />
{{bc|<nowiki><br />
[archlinuxgis]<br />
Server = http://archlinuxgis.no-ip.org/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgr ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:'''<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr]<br />
Server = http://archlinuxgr.tiven.org/archlinux/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgr-kde4 ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' KDE4 packages (plasmoids, themes etc) provided by the Hellenic (Greek) Arch Linux community<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr-kde4]<br />
Server = http://archlinuxgr.tiven.org/archlinux-kde4/$arch<br />
</nowiki>}}<br />
<br />
==== arsch ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' From users of orgizm.net<br />
<br />
{{bc|<nowiki><br />
[arsch]<br />
Server = http://arsch.orgizm.net/$arch<br />
</nowiki>}}<br />
<br />
==== cinnamon ====<br />
<br />
* '''Maintainer:''' [https://github.com/jnbek jnbek]<br />
* '''Description:''' Stable and actively developed Cinnamon packages (Applets, Themes, Extensions), plus others (Hotot, qBitTorrent, GTK themes, Perl modules, and more).<br />
<br />
{{bc|<nowiki><br />
[cinnamon]<br />
Server = http://archlinux.zoelife4u.org/cinnamon/$arch<br />
</nowiki>}}<br />
<br />
==== ede ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Equinox Desktop Environment repository<br />
<br />
{{bc|<nowiki><br />
[ede]<br />
Server = http://ede.elderlinux.org/repos/archlinux/$arch<br />
</nowiki>}}<br />
<br />
==== heftig ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#heftig Jan Steffens]<br />
* '''Description:''' Includes linux-zen and aurora (Firefox development build - works alongside {{Pkg|firefox}} in the ''extra'' repository).<br />
* '''Upstream page:''' https://bbs.archlinux.org/viewtopic.php?id=117157<br />
<br />
{{bc|<nowiki><br />
[heftig]<br />
Server = http://pkgbuild.com/~heftig/repo/$arch<br />
</nowiki>}}<br />
<br />
==== home_Minerva_W_Science_Arch_Extra ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' [[OpenFOAM]] packages.<br />
<br />
{{bc|<nowiki><br />
[home_Minerva_W_Science_Arch_Extra]<br />
SigLevel = Never<br />
Server = http://download.opensuse.org/repositories/home:/Minerva_W:/Science/Arch_Extra/$arch <br />
</nowiki>}}<br />
<br />
==== mesa-git ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/people/trusted-users/#lcarlier Laurent Carlier]<br />
* '''Description:''' Mesa git builds for the ''testing'' and ''multilib-testing'' repositories<br />
<br />
{{bc|<nowiki><br />
[mesa-git]<br />
Server = http://pkgbuild.com/~lcarlier/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== noware ====<br />
<br />
* '''Maintainer:''' Alexandru Thirtheu (alex_giusi_tiri2@yahoo.com) ([https://bbs.archlinux.org/profile.php?id=65036 Forums]) ([[User:AGT|Wiki]]) ([http://direct.noware.systems.:2 Web Site])<br />
* '''Description:''' Software which I prefer to have in a repository rather than compiling it each time; it eases software maintenance, I find. Almost anything goes.<br />
<br />
{{bc|<nowiki><br />
[noware]<br />
Server = http://direct.$repo.systems.:2/repository/arch/$arch<br />
</nowiki>}}<br />
<br />
==== oracle ====<br />
<br />
* '''Maintainer:''' [[User:Malvineous|Malvineous]]<br />
* '''Description:''' Oracle database client<br />
<br />
{{Warning|By adding this you are agreeing to the Oracle license at http://www.oracle.com/technetwork/licenses/instant-client-lic-152016.html}}<br />
<br />
{{bc|<nowiki><br />
[oracle]<br />
Server = http://linux.shikadi.net/arch/$repo/$arch/<br />
</nowiki>}}<br />
<br />
==== pantheon ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Repository containing Pantheon-related packages<br />
<br />
{{bc|<nowiki><br />
[pantheon]<br />
Server = http://pkgbuild.com/~alucryd/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== paulburton-fitbitd ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Contains fitbitd for synchronizing FitBit trackers<br />
<br />
{{bc|<nowiki><br />
[paulburton-fitbitd]<br />
Server = http://www.paulburton.eu/arch/fitbitd/$arch<br />
</nowiki>}}<br />
<br />
==== pietma ====<br />
<br />
* '''Maintainer:''' MartiMcFly <martimcfly@autorisation.de><br />
* '''Description:''' Arch User Repository packages [https://aur.archlinux.org/packages/?K=martimcfly&SeB=m I create or maintain].<br />
* '''Upstream page:''' http://pietma.com/tag/aur/<br />
<br />
{{bc|<nowiki><br />
[pietma]<br />
SigLevel = Optional TrustAll<br />
Server = http://repository.pietma.com/nexus/content/repositories/archlinux/$arch/$repo<br />
</nowiki>}}<br />
<br />
==== pfkernel ====<br />
<br />
* '''Maintainer:''' [[User:Nous|nous]]<br />
* '''Description:''' Generic and optimized binaries of the ARCH kernel patched with BFS, TuxOnIce, BFQ, Aufs3; i.e. linux-pf[-cpu] and linux-pf-lts[-cpu]. Also, openrc and initscripts-openrc.<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/11734958/x86_64/index.html or start at http://bit.do/linux-pf<br />
<br />
{{bc|<nowiki><br />
[pfkernel]<br />
Server = http://dl.dropbox.com/u/11734958/$arch<br />
</nowiki>}}<br />
<br />
==== suckless ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' suckless.org packages<br />
<br />
{{bc|<nowiki><br />
[suckless]<br />
Server = http://dl.suckless.org/arch/$arch<br />
</nowiki>}}<br />
<br />
==== trinity ====<br />
<br />
* '''Maintainer:''' [[User:Mmanley|Michael Manley]]<br />
* '''Description:''' [[Trinity]] Desktop Environment<br />
<br />
{{bc|<nowiki><br />
[trinity]<br />
Server = http://repo.nasutek.com/arch/contrib/trinity/$arch<br />
</nowiki>}}<br />
<br />
==== Unity-for-Arch ====<br />
<br />
* '''Maintainer:''' https://github.com/chenxiaolong<br />
* '''Description:''' [[Unity]] packages for Arch<br />
<br />
{{bc|<nowiki><br />
[Unity-for-Arch]<br />
SigLevel = Optional TrustAll<br />
Server = http://dl.dropbox.com/u/486665/Repos/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== Unity-for-Arch-Extra ====<br />
<br />
* '''Maintainer:''' https://github.com/chenxiaolong<br />
* '''Description:''' [[Unity]] extra packages for Arch<br />
<br />
{{bc|<nowiki><br />
[Unity-for-Arch-Extra]<br />
SigLevel = Optional TrustAll<br />
Server = http://dl.dropbox.com/u/486665/Repos/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== home_tarakbumba_archlinux_Arch_Extra_standard ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Contains a few pre-built AUR packages (zemberek, etc.)<br />
<br />
{{bc|<nowiki><br />
[home_tarakbumba_archlinux_Arch_Extra_standard]<br />
Server = http://download.opensuse.org/repositories/home:/tarakbumba:/archlinux/Arch_Extra_standard/$arch<br />
</nowiki>}}<br />
<br />
==== QOwnNotes ====<br />
<br />
* '''Maintainer:''' http://www.qownnotes.org<br />
* '''Description:''' QOwnNotes is an open source notepad and todo list manager with markdown support and [[ownCloud]] integration.<br />
<br />
{{bc|<nowiki><br />
[home_pbek_QOwnNotes_Arch_Extra]<br />
SigLevel = Optional TrustAll<br />
Server = http://download.opensuse.org/repositories/home:/pbek:/QOwnNotes/Arch_Extra/$arch<br />
</nowiki>}}<br />
<br />
== i686 only ==<br />
<br />
=== Signed ===<br />
<br />
==== eee-ck ====<br />
<br />
* '''Maintainer:''' Gruppenpest<br />
* '''Description:''' Kernel and modules optimized for Asus Eee PC 701, with -ck patchset.<br />
* '''Key-ID:''' 27D4A19A<br />
* '''Keyfile''' http://zembla.duckdns.org/repo/gruppenpest.gpg<br />
<br />
{{bc|<nowiki><br />
[eee-ck]<br />
Server = http://zembla.duckdns.org/repo<br />
</nowiki>}}<br />
<br />
==== phillid ====<br />
<br />
* '''Maintainer:''' Phillid<br />
* '''Description:''' Various GCC and matching binutils builds which target bare-bones formats (for OS dev). The GCC toolchains are shrunk to ~8&nbsp;MiB each by disabling NLS and everything but the C front-end. Also included is some ham-radio software I use, such as hamlib, xastir and qsstv, plus a couple of legacy packages which are a bit lengthy to build for most people (kdelibs3, qt3).<br />
* '''Key-ID:''' 28F1E6CE<br />
<br />
{{bc|<nowiki><br />
[phillid]<br />
Server = http://phillid.tk/r/i686/<br />
</nowiki>}}<br />
<br />
==== xyne-i686 ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for the "i686" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{Note|This includes all packages in [[#xyne-any|<nowiki>[xyne-any]</nowiki>]].}}<br />
<br />
{{bc|<nowiki><br />
[xyne-i686]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
==== andrwe ====<br />
<br />
* '''Maintainer:''' Andrwe Lord Weber<br />
* '''Description:''' each program I'm using on x86_64 is compiled for i686 too<br />
* '''Upstream page:''' http://andrwe.org/linux/repository<br />
<br />
{{bc|<nowiki><br />
[andrwe]<br />
Server = http://repo.andrwe.org/i686<br />
</nowiki>}}<br />
<br />
==== kpiche ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Stable OpenSync packages.<br />
<br />
{{bc|<nowiki><br />
[kpiche]<br />
Server = http://kpiche.archlinux.ca/repo<br />
</nowiki>}}<br />
<br />
==== kernel26-pae ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' PAE-enabled 32-bit kernel 2.6.39<br />
<br />
{{bc|<nowiki><br />
[kernel26-pae]<br />
Server = http://kernel26-pae.archlinux.ca/<br />
</nowiki>}}<br />
<br />
==== linux-pae ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' PAE-enabled 32-bit kernel 3.0<br />
<br />
{{bc|<nowiki><br />
[linux-pae]<br />
Server = http://pae.archlinux.ca/<br />
</nowiki>}}<br />
<br />
==== rfad ====<br />
<br />
* '''Maintainer:''' requiem [at] archlinux.us<br />
* '''Description:''' Repository made by haxit<br />
<br />
{{bc|<nowiki><br />
[rfad]<br />
Server = http://web.ncf.ca/ey723/archlinux/repo/<br />
</nowiki>}}<br />
<br />
==== studioidefix ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Precompiled boxee packages.<br />
<br />
{{bc|<nowiki><br />
[studioidefix]<br />
Server = http://studioidefix.googlecode.com/hg/repo/i686<br />
</nowiki>}}<br />
<br />
== x86_64 only ==<br />
<br />
=== Signed ===<br />
<br />
==== apathism ====<br />
<br />
* '''Maintainer:''' Ivan Koryabkin ([https://aur.archlinux.org/account/apathism/ apathism])<br />
* '''Upstream page:''' https://apathism.net/<br />
* '''Description:''' Some AUR packages like {{AUR|psi-plus-git}} (with qt5 enabled).<br />
* '''Key-ID:''' 3E37398D<br />
* '''Keyfile:''' http://apathism.net/archlinux/apathism.key<br />
<br />
{{bc|<nowiki><br />
[apathism]<br />
Server = http://apathism.net/archlinux/<br />
</nowiki>}}<br />
<br />
==== archzfs ====<br />
<br />
* '''Maintainer:''' [http://archzfs.com Jesus Alvarez (demizer)]<br />
* '''Description:''' Packages for ZFS on Arch Linux.<br />
* '''Upstream page:''' https://github.com/archzfs/archzfs<br />
* '''Key-ID:''' 5E1ABF240EE7A126<br />
<br />
{{bc|<nowiki><br />
[archzfs]<br />
Server = http://archzfs.com/$repo/x86_64<br />
</nowiki>}}<br />
<br />
==== ashleyis ====<br />
<br />
* '''Maintainer:''' Ashley Towns ([https://aur.archlinux.org/account/ashleyis/ ashleyis])<br />
* '''Description:''' Debug versions of SDL, chipmunk, libtmx and other miscellaneous game libraries, plus swift-lang and some other AUR packages<br />
* '''Key-ID:''' B1A4D311<br />
<br />
{{bc|<nowiki><br />
[ashleyis]<br />
Server = http://arch.ashleytowns.id.au/repo/$arch<br />
</nowiki>}}<br />
<br />
==== atom ====<br />
<br />
* '''Maintainer:''' Nicola Squartini ([https://github.com/tensor5 tensor5])<br />
* '''Upstream page:''' https://github.com/tensor5/arch-atom<br />
* '''Description:''' Atom text editor and Electron<br />
* '''Key-ID:''' B0544167<br />
<br />
{{bc|<nowiki><br />
[atom]<br />
Server = http://noaxiom.org/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== bioinformatics ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/decryptedepsilon/ decryptedepsilon]<br />
* '''Description:''' A repository containing some software tools for Bioinformatics<br />
* '''Key-ID:''' 60442BA4<br />
<br />
{{bc|<nowiki><br />
[bioinformatics]<br />
Server = http://decryptedepsilon.bl.ee/repo/x86_64<br />
</nowiki>}}<br />
<br />
==== boyska64 ====<br />
<br />
* '''Maintainer:''' boyska<br />
* '''Description:''' Personal repository: cryptography, sdr, mail handling and misc<br />
* '''Key-ID:''' 0x7395DCAE58289CA9<br />
<br />
{{bc|<nowiki><br />
[boyska64]<br />
Server = http://boyska.degenerazione.xyz/archrepo<br />
</nowiki>}}<br />
<br />
==== coderkun-aur ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/coderkun/ coderkun]<br />
* '''Description:''' AUR packages with random software. Supporting package deltas and package and database signing.<br />
* '''Upstream page:''' https://www.coderkun.de/arch<br />
* '''Key-ID:''' A6BEE374<br />
* '''Keyfile:''' [https://www.coderkun.de/coderkun.asc https://www.coderkun.de/coderkun.asc]<br />
<br />
{{bc|<nowiki><br />
[coderkun-aur]<br />
Server = http://arch.coderkun.de/$repo/$arch/<br />
</nowiki>}}<br />
<br />
==== coderkun-aur-audio ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/coderkun/ coderkun]<br />
* '''Description:''' AUR packages with audio-related (realtime kernels, lv2-plugins, …) software. Supporting package deltas and package and database signing.<br />
* '''Upstream page:''' https://www.coderkun.de/arch<br />
* '''Key-ID:''' A6BEE374<br />
* '''Keyfile:''' [https://www.coderkun.de/coderkun.asc https://www.coderkun.de/coderkun.asc]<br />
<br />
{{bc|<nowiki><br />
[coderkun-aur-audio]<br />
Server = http://arch.coderkun.de/$repo/$arch/<br />
</nowiki>}}<br />
<br />
==== eatabrick ====<br />
<br />
* '''Maintainer:''' bentglasstube<br />
* '''Description:''' Packages for software written by (and a few just compiled by) bentglasstube.<br />
<br />
{{bc|<nowiki><br />
[eatabrick]<br />
SigLevel = Required<br />
Server = http://repo.eatabrick.org/$arch<br />
</nowiki>}}<br />
<br />
==== freifunk-rheinland ====<br />
<br />
* '''Maintainer:''' nomaster<br />
* '''Description:''' Packages for the Freifunk project: batman-adv, batctl, fastd and dependencies.<br />
<br />
{{bc|<nowiki><br />
[freifunk-rheinland]<br />
Server = http://mirror.fluxent.de/archlinux-custom/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== holo ====<br />
<br />
* '''Maintainer:''' Stefan Majewsky <holo-pacman@posteo.de> (please prefer to report issues at [https://github.com/majewsky/holo-pacman-repo/issues Github])<br />
* '''Description:''' Packages for [https://holocm.org Holo configuration management], including compatible plugins and tools.<br />
* '''Upstream page:''' https://github.com/majewsky/holo-pacman-repo<br />
* '''Package list:''' https://repo.holocm.org/archlinux/x86_64<br />
* '''Key-ID:''' 0xF7A9C9DC4631BD1A<br />
<br />
{{bc|<nowiki><br />
[holo]<br />
Server = https://repo.holocm.org/archlinux/x86_64<br />
</nowiki>}}<br />
<br />
==== Linux-pf ====<br />
<br />
{{Accuracy|Signed repositories should not use {{ic|1=SigLevel = Optional}} (by definition).}}<br />
<br />
* '''Maintainer:''' [[User:Thaodan|Thaodan]]<br />
* '''Description:''' Generic and optimized binaries of the ARCH kernel patched with BFS, TuxOnIce, BFQ, Aufs3; i.e. linux-pf, just like {{AUR|linux-pf}} from the [[AUR]], but additionally optimized for the Intel Sandy Bridge, Ivy Bridge and Haswell CPUs (and generic, of course), plus some extra packages<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox).<br />
<br />
{{bc|<nowiki><br />
[Linux-pf]<br />
Server = https://dl.dropboxusercontent.com/u/172590784/Linux-pf/x86_64/<br />
SigLevel = Optional<br />
</nowiki>}}<br />
<br />
==== infinality-bundle-multilib ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle multilib repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle-multilib]<br />
Server = http://bohoomil.com/repo/multilib/$arch<br />
</nowiki>}}<br />
<br />
==== kc9ydn ====<br />
<br />
* '''Maintainer:''' [http://kc9ydn.us KC9YDN]<br />
* '''Description:''' Consists mostly of amateur radio related apps<br />
* '''Key-ID:''' 7DA25A0F<br />
<br />
{{bc|<nowiki><br />
[kc9ydn]<br />
Server = http://kc9ydn.us/repo/<br />
</nowiki>}}<br />
<br />
==== linux-lts-ck ====<br />
<br />
* '''Maintainer:''' Claire Farron [https://aur.archlinux.org/account/clfarron4 clfarron4]<br />
* '''Description:''' Current ArchLinux LTS kernel with the CK patch<br />
* '''Key-ID:''' E6366A92<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/298301785/arch/linux-lts-ck/x86_64/index.html or start at http://tiny.cc/linux-lts-ck<br />
<br />
{{bc|<nowiki><br />
[linux-lts-ck]<br />
Server = http://dl.dropbox.com/u/298301785/arch/linux-lts-ck/$arch<br />
</nowiki>}}<br />
<br />
==== linux-lts31x ====<br />
<br />
* '''Maintainer:''' Claire Farron [https://aur.archlinux.org/account/clfarron4 clfarron4]<br />
* '''Description:''' Older LTS kernels (3.10 and 3.12 branch)<br />
* '''Key-ID:''' E6366A92<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/298301785/arch/linux-lts31x/x86_64/index.html or start at http://tiny.cc/linux-lts31x<br />
<br />
{{bc|<nowiki><br />
[linux-lts31x]<br />
Server = http://dl.dropbox.com/u/298301785/arch/linux-lts31x/$arch<br />
</nowiki>}}<br />
<br />
==== linux-lts31x-ck ====<br />
<br />
* '''Maintainer:''' Claire Farron [https://aur.archlinux.org/account/clfarron4 clfarron4]<br />
* '''Description:''' Older LTS kernels (3.10 and 3.12 branch) with the CK patch<br />
* '''Key-ID:''' E6366A92<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/298301785/arch/linux-lts31x-ck/x86_64/index.html or start at http://tiny.cc/linux-lts31x-ck<br />
<br />
{{bc|<nowiki><br />
[linux-lts31x-ck]<br />
Server = http://dl.dropbox.com/u/298301785/arch/linux-lts31x-ck/$arch<br />
</nowiki>}}<br />
<br />
==== linux-ck-pax ====<br />
<br />
* '''Maintainer:''' Claire Farron [https://aur.archlinux.org/account/clfarron4 clfarron4]<br />
* '''Description:''' Current Arch Kernel with the CK and PaX security patchsets<br />
* '''Key-ID:''' E6366A92<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/298301785/arch/linux-ck-pax/x86_64/index.html or start at http://tiny.cc/linux-ck-pax<br />
<br />
{{bc|<nowiki><br />
[linux-ck-pax]<br />
Server = http://dl.dropbox.com/u/298301785/arch/linux-ck-pax/$arch<br />
</nowiki>}}<br />
<br />
==== linux-kalterfx ====<br />
<br />
* '''Maintainer''': Anna Ivanova ([https://aur.archlinux.org/account/kalterfive kalterfive])<br />
* '''Upstream page''': https://kalterfive.github.io/linux-kalterfx/about.html<br />
* '''Description''': A custom kernel with applied pf patchset and compiled fs/reiser4.<br />
* '''Key-ID''': A0C04F15<br />
* '''Keyfile''': https://keybase.io/kalterfive/key.asc<br />
<br />
{{bc|<nowiki><br />
[linux-kalterfx]<br />
Server = http://deadsoftware.ru/files/linux-kalterfx/repo/$arch<br />
</nowiki>}}<br />
<br />
==== linux-tresor ====<br />
<br />
* '''Maintainer:''' Claire Farron [https://aur.archlinux.org/account/clfarron4 clfarron4]<br />
* '''Description:''' Arch Current and LTS kernels with TRESOR<br />
* '''Key-ID:''' E6366A92<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/298301785/arch/linux-tresor/x86_64/index.html or start at http://tiny.cc/linux-tresor<br />
<br />
{{bc|<nowiki><br />
[linux-tresor]<br />
Server = http://dl.dropbox.com/u/298301785/arch/linux-tresor/$arch<br />
</nowiki>}}<br />
<br />
==== nullptr_t ====<br />
<br />
* '''Maintainers:''' nullptr_t<br />
* '''Description:''' Cherry-picked non-proprietary packages and admin tools from the AUR (e.g. [[plymouth]], nemo-extensions and a few more)<br />
* '''Comment:''' Down until packaging is explicitly allowed by all GPL licenses.<br />
* '''Key-ID:''' B4767A17CEC5B4E9<br />
<br />
{{bc|<nowiki><br />
[nullptr_t]<br />
Server = https://archlinux.0ptr.de/mirrors/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== markzz ====<br />
<br />
* '''Maintainer:''' [[User:Markzz|Mark Weiman (markzz)]]<br />
* '''Description:''' Packages that markzz maintains or uses on the AUR; this includes Linux with the vfio patchset ({{AUR|linux-vfio}} and {{AUR|linux-vfio-lts}}), and packages to maintain a Debian package repository.<br />
* '''Sources:''' http://git.markzz.net/markzz/repositories/markzz.git/tree<br />
* '''Key ID:''' 3CADDFDD<br />
<br />
{{Note|If you want to add the key by installing the ''markzz-keyring'' package, temporarily add {{ic|1=SigLevel = Never}} into the repository section.}}<br />
<br />
{{bc|<nowiki><br />
[markzz]<br />
Server = http://repo.markzz.com/arch/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== qt-debug ====<br />
<br />
* '''Maintainer:''' [http://blog.the-compiler.org/?page_id=36 The Compiler]<br />
* '''Description:''' Qt/PyQt builds with debug symbols<br />
* '''Upstream page:''' https://github.com/The-Compiler/qt-debug-pkgbuild<br />
* '''Key-ID:''' D6A1C70FE80A0C82<br />
<br />
{{bc|<nowiki><br />
[qt-debug]<br />
Server = http://qutebrowser.org/qt-debug/$arch<br />
</nowiki>}}<br />
<br />
==== quarry ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/developers/#anatolik anatolik]<br />
* '''Description:''' Arch binary repository for [http://rubygems.org/ Rubygems] packages. See [https://bbs.archlinux.org/viewtopic.php?id=182729 forum announcement] for more information.<br />
* '''Sources:''' https://github.com/anatol/quarry<br />
* '''Key-ID:''' Not needed, as maintainer is a developer<br />
<br />
{{bc|<nowiki><br />
[quarry]<br />
Server = http://pkgbuild.com/~anatolik/quarry/x86_64/<br />
</nowiki>}}<br />
<br />
==== rstudio ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/unikum/ Artem Klevtsov]<br />
* '''Description:''' RStudio IDE package (git version) and its dependencies.<br />
* '''Key-ID:''' 1CB48DD4<br />
<br />
{{bc|<nowiki><br />
[rstudio]<br />
Server = http://repo.psylab.info/archlinux/x86_64/<br />
</nowiki>}}<br />
<br />
==== siosm-aur ====<br />
<br />
* '''Maintainer:''' [https://tim.siosm.fr/about/ Timothee Ravier]<br />
* '''Description:''' packages also available in the Arch User Repository, sometimes with minor fixes<br />
* '''Upstream page:''' https://tim.siosm.fr/repositories/<br />
* '''Key-ID:''' 78688F83<br />
<br />
{{bc|<nowiki><br />
[siosm-aur]<br />
Server = http://siosm.fr/repo/$repo/<br />
</nowiki>}}<br />
<br />
==== subtitlecomposer ====<br />
<br />
* '''Maintainer:''' Mladen Milinkovic (maxrd2)<br />
* '''Description:''' Subtitle Composer stable and nightly builds<br />
* '''Upstream page:''' https://github.com/maxrd2/subtitlecomposer<br />
* '''Key-ID:''' EF9D9B26<br />
<br />
{{bc|<nowiki><br />
[subtitlecomposer]<br />
Server = http://smoothware.net/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== xyne-x86_64 ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for the "x86_64" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{Note|This includes all packages in [[#xyne-any|<nowiki>[xyne-any]</nowiki>]].}}<br />
<br />
{{bc|<nowiki><br />
[xyne-x86_64]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
<br />
==== alucryd ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Various packages Maxime Gauduin maintains (or not) in the AUR.<br />
<br />
{{bc|<nowiki><br />
[alucryd]<br />
Server = http://pkgbuild.com/~alucryd/$repo/x86_64<br />
</nowiki>}}<br />
<br />
==== alucryd-multilib ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Various packages needed to run Steam without its runtime environment.<br />
<br />
{{bc|<nowiki><br />
[alucryd-multilib]<br />
Server = http://pkgbuild.com/~alucryd/$repo/x86_64<br />
</nowiki>}}<br />
<br />
==== andrwe ====<br />
<br />
* '''Maintainer:''' Andrwe Lord Weber<br />
* '''Description:''' contains programs I'm using on many systems<br />
* '''Upstream page:''' http://andrwe.dyndns.org/doku.php/blog/repository {{Dead link|2013|11|30}}<br />
<br />
{{bc|<nowiki><br />
[andrwe]<br />
Server = http://repo.andrwe.org/x86_64<br />
</nowiki>}}<br />
<br />
==== archstudio ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Audio and Music Packages optimized for Intel Core i3, i5, and i7.<br />
* '''Upstream page:''' http://www.xsounds.org/~archstudio<br />
<br />
{{bc|<nowiki><br />
[archstudio]<br />
Server = http://www.xsounds.org/~archstudio/x86_64<br />
</nowiki>}}<br />
<br />
==== brtln ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#bpiotrowski Bartłomiej Piotrowski]<br />
* '''Description:''' Some VCS packages.<br />
<br />
{{bc|<nowiki><br />
[brtln]<br />
Server = http://pkgbuild.com/~barthalion/brtln/$arch/<br />
</nowiki>}}<br />
<br />
==== jkanetwork ====<br />
<br />
* '''Maintainer:''' kprkpr <kevin01010 at gmail dot com><br />
* '''Maintainer:''' Joselucross <jlgarrido97 at gmail dot com><br />
* '''Description:''' AUR packages such as pimagizer, stepmania, yaourt, linux-mainline, wps-office, grub-customizer and some IDEs. Open to everyone who wants to contribute.<br />
* '''Upstream page:''' http://repo.jkanetwork.com/<br />
<br />
{{bc|<nowiki><br />
[jkanetwork]<br />
Server = http://repo.jkanetwork.com/repo/$repo/<br />
</nowiki>}}<br />
<br />
==== mazdlc ====<br />
<br />
* '''Maintainer:''' maz-1 <ohmygod19993 at gmail dot com><br />
* '''Description:''' Various packages maintained by maz-1 (mainly Qt5-based and multimedia-related packages)<br />
* '''Upstream page:''' https://build.opensuse.org/project/show/home:mazdlc<br />
<br />
{{bc|<nowiki><br />
[home_mazdlc_Arch_Extra]<br />
Server = http://download.opensuse.org/repositories/home:/mazdlc/Arch_Extra/$arch<br />
</nowiki>}}<br />
<br />
==== mazdlc-deadbeef-plugins ====<br />
<br />
* '''Maintainer:''' maz-1 <ohmygod19993 at gmail dot com><br />
* '''Description:''' Plugins for the feature-rich music player DeaDBeeF.<br />
* '''Upstream page:''' https://build.opensuse.org/project/show/home:mazdlc<br />
<br />
{{bc|<nowiki><br />
[home_mazdlc_deadbeef-plugins_Arch_Extra]<br />
Server = http://download.opensuse.org/repositories/home:/mazdlc:/deadbeef-plugins/Arch_Extra/$arch<br />
</nowiki>}}<br />
<br />
==== mazdlc-kde-frameworks-5 ====<br />
<br />
* '''Maintainer:''' maz-1 <ohmygod19993 at gmail dot com><br />
* '''Description:''' Unstable packages based on kde frameworks 5.<br />
* '''Upstream page:''' https://build.opensuse.org/project/show/home:mazdlc<br />
<br />
{{bc|<nowiki><br />
[home_mazdlc_kde-frameworks-5_Arch_Extra]<br />
Server = http://download.opensuse.org/repositories/home:/mazdlc:/kde-frameworks-5/Arch_Extra/$arch<br />
</nowiki>}}<br />
<br />
==== mikroskeem ====<br />
<br />
* '''Maintainer:''' mikroskeem <mikroskeem@mikroskeem.eu><br />
* '''Description:''' OpenArena, i3 wm, and Neovim-related packages (run ''pacman -Sl mikroskeem'' for the full list)<br />
<br />
{{bc|<nowiki><br />
[mikroskeem]<br />
Server = https://nightsnack.cf/~mark/arch-pkgs<br />
</nowiki>}}<br />
<br />
==== mingw-w64 ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/ant32 Philip] and [https://aur.archlinux.org/account/nic96 Jeromy] Reimer<br />
* '''Description:''' Almost all mingw-w64 packages in the AUR.<br />
<br />
{{bc|<nowiki><br />
[mingw-w64]<br />
Server = http://downloads.sourceforge.net/project/mingw-w64-archlinux/$arch<br />
#Server = http://amr.linuxd.org/archlinux/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== pnsft-pur ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Japanese input method packages Mozc (vanilla) and libkkc<br />
<br />
{{bc|<nowiki><br />
[pnsft-pur]<br />
Server = http://downloads.sourceforge.net/project/pnsft-aur/pur/x86_64<br />
</nowiki>}}<br />
<br />
==== rakudo ====<br />
<br />
* '''Maintainer:''' spider-mario <spidermario@free.fr><br />
* '''Description:''' Rakudo Perl6<br />
<br />
{{bc|<nowiki><br />
[rakudo]<br />
Server = https://spider-mario.quantic-telecom.net/archlinux/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== rightlink ====<br />
<br />
* '''Maintainer:''' Chris Fordham <chris@fordham-nagy.id.au><br />
* '''Description:''' RightLink version 10 (RL10) is a new version of RightScale's server agent that connects servers managed through RightScale to the RightScale cloud management platform.<br />
<br />
{{bc|<nowiki><br />
[rightlink]<br />
Server = https://s3-ap-southeast-2.amazonaws.com/archlinux.rightscale.me/repo<br />
</nowiki>}}<br />
<br />
==== seiichiro ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' VDR and some plugins, mms, foo2zjs-drivers<br />
<br />
{{bc|<nowiki><br />
[seiichiro]<br />
Server = http://repo.seiichiro0185.org/x86_64<br />
</nowiki>}}<br />
<br />
==== studioidefix ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Precompiled boxee packages.<br />
<br />
{{bc|<nowiki><br />
[studioidefix]<br />
Server = http://studioidefix.googlecode.com/hg/repo/x86_64<br />
</nowiki>}}<br />
<br />
==== zrootfs ====<br />
<br />
* '''Maintainer:''' Isabell Cowan <isabellcowan@gmail.com><br />
* '''Description:''' Packages built for Haswell and Broadwell processors with small size in mind (out of date as of 2016-03-14).<br />
<br />
{{bc|<nowiki><br />
[zrootfs]<br />
Server = http://www.izzette.com/izzi/zrootfs-old<br />
</nowiki>}}<br />
<br />
== armv6h only ==<br />
<br />
=== Unsigned ===<br />
<br />
==== arch-fook-armv6h ====<br />
<br />
* '''Maintainer:''' Jaska Kivelä <jaska@kivela.net><br />
* '''Description:''' Stuff I have compiled for my Raspberry Pi, including Enlightenment and home automation software.<br />
<br />
{{bc|<nowiki><br />
[arch-fook-armv6h]<br />
Server = http://kivela.net/jaska/arch-fook-armv6h<br />
</nowiki>}}<br />
<br />
== armv7h only ==<br />
<br />
=== Unsigned ===<br />
<br />
==== pietma ====<br />
<br />
* '''Maintainer:''' MartiMcFly <martimcfly@autorisation.de><br />
* '''Description:''' [https://aur.archlinux.org/packages/?K=martimcfly&SeB=m Arch User Repository packages I create or maintain].<br />
* '''Upstream page:''' http://pietma.com/tag/aur/<br />
<br />
{{bc|<nowiki><br />
[pietma]<br />
SigLevel = Optional TrustAll<br />
Server = http://repository.pietma.com/nexus/content/repositories/archlinux/$arch/$repo<br />
</nowiki>}}</div>Demizerhttps://wiki.archlinux.org/index.php?title=Unofficial_user_repositories&diff=432784Unofficial user repositories2016-04-25T06:21:02Z<p>Demizer: Remove demz-repo, no longer valid</p>
<hr />
<div>[[Category:Package management]]<br />
[[ja:非公式ユーザーリポジトリ]]<br />
[[zh-CN:Unofficial user repositories]]<br />
{{Expansion|Please fill in the missing information about repository maintainers.}}<br />
<br />
{{Related articles start}}<br />
{{Related|pacman-key}}<br />
{{Related|Official repositories}}<br />
{{Related articles end}}<br />
<br />
This article lists binary repositories freely created and shared by the community, often providing pre-built versions of PKGBUILDs found in the [[AUR]].<br />
<br />
{{Warning|Neither the official Arch Linux Developers nor the Trusted Users perform tests of any sort to verify the contents of these repositories; it is up to each user to decide whether to trust their maintainers, and take full responsibility for whatever their decision brings.}}<br />
<br />
In order to use these repositories, you will have to add them to {{ic|/etc/pacman.conf}}, as explained in [[pacman#Repositories and mirrors]]. If a repository is signed, you will have to obtain and locally sign the associated key, as explained in [[Pacman-key#Adding unofficial keys]].<br />
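<br />
For example, to enable a hypothetical signed repository called {{ic|example}} (the name, URL and key ID below are placeholders, not a real repository), the {{ic|/etc/pacman.conf}} entry and the key import would look roughly like this:<br />
<br />
{{bc|<nowiki><br />
[example]<br />
Server = https://example.org/repo/$arch<br />
</nowiki>}}<br />
<br />
{{bc|<nowiki><br />
# pacman-key --recv-keys 1234ABCD<br />
# pacman-key --lsign-key 1234ABCD<br />
# pacman -Syu<br />
</nowiki>}}<br />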
<br />
If you want to create your own custom repository, follow [[pacman tips#Custom local repository]].<br />
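<br />
As a minimal sketch (all paths and names here are illustrative), such a repository is just a directory of packages plus a database generated with ''repo-add'', which pacman can then read via a {{ic|file://}} URL:<br />
<br />
{{bc|<nowiki><br />
$ repo-add /srv/customrepo/customrepo.db.tar.gz /srv/customrepo/*.pkg.tar.xz<br />
</nowiki>}}<br />
<br />
{{bc|<nowiki><br />
[customrepo]<br />
SigLevel = Optional TrustAll<br />
Server = file:///srv/customrepo<br />
</nowiki>}}<br />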
<br />
{{Tip|To get a list of all servers listed on this page: {{bc|<nowiki>curl 'https://wiki.archlinux.org/index.php/Unofficial_user_repositories' | grep 'Server = ' | sed "s/\$arch/$(uname -m)/g" | cut -f 3 -d' '</nowiki>}}<br />
<br />
For your convenience you can, for example, open them all in a web browser to inspect the contents of their repositories.<br />
}}<br />
<br />
== Adding your repository to this page ==<br />
<br />
If you have your own repository, please add it to this page so that other users know where to find your packages. Please observe the following rules when adding new repositories (a skeleton example follows the list):<br />
<br />
* Keep the lists in alphabetical order.<br />
* Include some information about the maintainer: include at least a (nick)name and some form of contact information (web site, email address, user page on ArchWiki or the forums, etc.).<br />
* If the repository is of the ''signed'' variety, please include a key-id, possibly using it as the anchor for a link to its keyserver; if the key is not on a keyserver, include a link to the key file.<br />
* Include a short description (e.g. the category of packages provided in the repository).<br />
* If there is a page (either on ArchWiki or external) containing more information about the repository, include a link to it.<br />
* If possible, avoid using comments in code blocks. The formatted description is much more readable. Users who want some comments in their {{ic|pacman.conf}} can easily create it on their own.<br />
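<br />
Putting these rules together, a skeleton listing might look like the following (every value is a placeholder, not a real repository), followed by a {{ic|pacman.conf}} code block as in the existing entries:<br />
<br />
{{bc|<nowiki><br />
==== example-repo ====<br />
<br />
* '''Maintainer:''' Jane Doe [https://example.org contact]<br />
* '''Description:''' Short description of the provided packages.<br />
* '''Key-ID:''' 0123ABCD<br />
</nowiki>}}<br />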
<br />
== Any ==<br />
<br />
"Any" repositories are architecture-independent. In other words, they can be used on both i686 and x86_64 systems.<br />
<br />
=== Signed ===<br />
<br />
==== infinality-bundle-fonts ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle-fonts repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle-fonts]<br />
Server = http://bohoomil.com/repo/fonts<br />
</nowiki>}}<br />
<br />
==== ivasilev ====<br />
<br />
* '''Maintainer:''' [http://ivasilev.net Ianis G. Vasilev]<br />
* '''Description:''' A variety of packages, mostly my own software and AUR builds.<br />
* '''Upstream page:''' http://ivasilev.net/pacman<br />
* '''Key-ID:''' 436BB513<br />
<br />
{{Note|I maintain 'any', 'i686' and 'x86_64' repos; each of them includes the packages from 'any'. {{ic|$arch}} can be replaced with any of the three.}}<br />
<br />
{{bc|<nowiki><br />
[ivasilev]<br />
Server = http://ivasilev.net/pacman/any<br />
# Server = http://ivasilev.net/pacman/$arch<br />
</nowiki>}}<br />
<br />
==== pkgbuilder ====<br />
<br />
* '''Maintainer:''' [https://chriswarrick.com/ Chris Warrick]<br />
* '''Description:''' A repository for PKGBUILDer, a Python AUR helper.<br />
* '''Upstream page:''' https://github.com/Kwpolska/pkgbuilder<br />
* '''Key-ID:''' 5EAAEA16<br />
<br />
{{bc|<nowiki><br />
[pkgbuilder]<br />
Server = https://pkgbuilder-repo.chriswarrick.com/<br />
</nowiki>}}<br />
<br />
==== xyne-any ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for "any" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{Note|Use this repository only if there is no matching {{ic|[xyne-*]}} repository for your architecture.}}<br />
<br />
{{bc|<nowiki><br />
[xyne-any]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
==== archlinuxgr-any ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' The Hellenic (Greek) unofficial Arch Linux repository with many interesting packages.<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr-any]<br />
Server = http://archlinuxgr.tiven.org/archlinux/any<br />
</nowiki>}}<br />
<br />
== Both i686 and x86_64 ==<br />
<br />
Repositories with both i686 and x86_64 versions. The {{ic|$arch}} variable will be set automatically by pacman.<br />
<br />
=== Signed ===<br />
<br />
==== arcanisrepo ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#arcanis arcanis]<br />
* '''Description:''' A repository with some AUR packages including packages from VCS<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[arcanisrepo]<br />
Server = ftp://repo.arcanis.me/repo/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxcn ====<br />
<br />
* '''Maintainers:''' [https://plus.google.com/+PhoenixNemo/ Phoenix Nemo (phoenixlzx)], Felix Yan (felixonmars, TU), [https://twitter.com/lilydjwg lilydjwg], and others<br />
* '''Description:''' Packages by the Chinese Arch Linux community (mostly signed)<br />
* '''Git Repo:''' https://github.com/archlinuxcn/repo<br />
* '''Mirrors:''' https://github.com/archlinuxcn/mirrorlist-repo (Mostly for users in mainland China)<br />
* '''Key-ID:''' Once the repo is added, the ''archlinuxcn-keyring'' package must be installed before any other to avoid errors about PGP signatures.<br />
<br />
{{bc|<nowiki><br />
[archlinuxcn]<br />
SigLevel = Optional TrustedOnly<br />
Server = http://repo.archlinuxcn.org/$arch<br />
## or use a CDN (beta)<br />
#Server = https://cdn.repo.archlinuxcn.org/$arch<br />
</nowiki>}}<br />
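<br />
In practice (a sketch of the usual sequence, per the note above), the keyring is installed right after the first database refresh:<br />
<br />
{{bc|<nowiki><br />
# pacman -Sy archlinuxcn-keyring<br />
# pacman -Su<br />
</nowiki>}}<br />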
<br />
==== catalyst ====<br />
<br />
* '''Maintainer:''' [[User:Vi0L0|Vi0l0]]<br />
* '''Description:''' ATI Catalyst proprietary drivers.<br />
* '''Upstream Page:''' http://catalyst.wirephire.com<br />
* '''Key-ID:''' 653C3094<br />
<br />
{{bc|<nowiki><br />
[catalyst]<br />
Server = http://catalyst.wirephire.com/repo/catalyst/$arch<br />
## Mirrors, if the primary server does not work or is too slow:<br />
#Server = http://mirror.hactar.bz/Vi0L0/catalyst/$arch<br />
</nowiki>}}<br />
<br />
==== catalyst-hd234k ====<br />
<br />
* '''Maintainer:''' [[User:Vi0L0|Vi0l0]]<br />
* '''Description:''' ATI Catalyst proprietary drivers.<br />
* '''Upstream Page:''' http://catalyst.wirephire.com<br />
* '''Key-ID:''' 653C3094<br />
<br />
{{bc|<nowiki><br />
[catalyst-hd234k]<br />
Server = http://catalyst.wirephire.com/repo/catalyst-hd234k/$arch<br />
## Mirrors, if the primary server does not work or is too slow:<br />
#Server = http://mirror.hactar.bz/Vi0L0/catalyst-hd234k/$arch<br />
</nowiki>}}<br />
<br />
==== city ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#bgyorgy Balló György]<br />
* '''Description:''' Experimental/unpopular packages.<br />
* '''Upstream page:''' http://pkgbuild.com/~bgyorgy/city.html<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[city]<br />
Server = http://pkgbuild.com/~bgyorgy/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== haskell-core ====<br />
<br />
See [[ArchHaskell#haskell-core]].<br />
<br />
==== haskell-happstack ====<br />
<br />
See [[ArchHaskell#haskell-happstack]].<br />
<br />
==== haskell-web ====<br />
<br />
See [[ArchHaskell#haskell-web]].<br />
<br />
==== infinality-bundle ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle main repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle]<br />
Server = http://bohoomil.com/repo/$arch<br />
</nowiki>}}<br />
<br />
==== ivasilev ====<br />
<br />
* '''Maintainer:''' [http://ivasilev.net Ianis G. Vasilev]<br />
* '''Description:''' A variety of packages, mostly my own software and AUR builds.<br />
* '''Upstream page:''' http://ivasilev.net/pacman<br />
* '''Key-ID:''' 436BB513<br />
<br />
{{Note|I maintain 'any', 'i686' and 'x86_64' repos; each of them includes the packages from 'any'. {{ic|$arch}} can be replaced with any of the three.}}<br />
<br />
{{bc|<nowiki><br />
[ivasilev]<br />
Server = http://ivasilev.net/pacman/$arch<br />
</nowiki>}}<br />
<br />
==== llvm-svn ====<br />
<br />
* '''Maintainer:''' [[User:Kerberizer|Luchesar V. ILIEV (kerberizer)]]<br />
* '''Description:''' [https://aur.archlinux.org/pkgbase/llvm-svn llvm-svn] and [https://aur.archlinux.org/pkgbase/lib32-llvm-svn lib32-llvm-svn] from AUR: the LLVM compiler infrastructure, the Clang frontend, and the tools associated with it<br />
* '''Key-ID:''' [https://sks-keyservers.net/pks/lookup?op=vindex&search=0x76563F75679E4525&fingerprint=on&exact=on 0x76563F75679E4525], fingerprint <tt>D16C F22D 27D1 091A 841C 4BE9 7656 3F75 679E 4525</tt><br />
<br />
{{bc|<nowiki><br />
[llvm-svn]<br />
Server = http://repos.uni-plovdiv.net/archlinux/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== metalgamer ====<br />
<br />
* '''Maintainer:''' [http://metalgamer.eu/ metalgamer]<br />
* '''Description:''' Packages I use and/or maintain on the AUR.<br />
* '''Key ID:''' F55313FB<br />
<br />
{{bc|<nowiki><br />
[metalgamer]<br />
Server = http://repo.metalgamer.eu/$arch<br />
</nowiki>}}<br />
<br />
==== miffe ====<br />
<br />
* '''Maintainer:''' [https://bbs.archlinux.org/profile.php?id=4059 miffe]<br />
* '''Description:''' AUR packages maintained by miffe, e.g. linux-mainline<br />
* '''Key ID:''' 313F5ABD<br />
<br />
{{bc|<nowiki><br />
[miffe]<br />
Server = http://arch.miffe.org/$arch/<br />
</nowiki>}}<br />
<br />
==== pipelight ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Pipelight and wine-compholio<br />
* '''Upstream page:''' [http://fds-team.de/ fds-team.de]<br />
* '''Key-ID:''' E49CC0415DC2D5CA<br />
* '''Keyfile:''' http://repos.fds-team.de/Release.key<br />
<br />
{{bc|<nowiki><br />
[pipelight]<br />
Server = http://repos.fds-team.de/stable/arch/$arch<br />
</nowiki>}}<br />
<br />
==== repo-ck ====<br />
<br />
* '''Maintainer:''' [[User:Graysky|graysky]]<br />
* '''Description:''' Kernel and modules with Brain Fuck Scheduler and all the goodies in the ck1 patch set.<br />
* '''Upstream page:''' [http://repo-ck.com repo-ck.com]<br />
* '''Wiki:''' [[repo-ck]]<br />
* '''Key-ID:''' 5EE46C4C<br />
<br />
{{bc|<nowiki><br />
[repo-ck]<br />
Server = http://repo-ck.com/$arch<br />
</nowiki>}}<br />
<br />
==== seblu ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/developers/#seblu Sébastien Luttringer]<br />
* '''Description:''' All of Seblu's useful pre-built packages, some homemade (virtualbox-ext-oracle, linux-seblu-meta, bedup).<br />
* '''Key-ID:''' Not required, as maintainer is a Developer<br />
<br />
{{bc|<nowiki><br />
[seblu]<br />
Server = http://seblu.net/a/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== seiichiro ====<br />
<br />
* '''Maintainer:''' [https://www.seiichiro0185.org Stefan Brand (seiichiro0185)]<br />
* '''Description:''' AUR-packages I use frequently<br />
* '''Key-ID:''' 805517CC<br />
<br />
{{bc|<nowiki><br />
[seiichiro]<br />
Server = http://www.seiichiro0185.org/repo/$arch<br />
</nowiki>}}<br />
<br />
==== sergej-repo ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#spupykin Sergej Pupykin]<br />
* '''Description:''' psi-plus, owncloud-git, ziproxy, android, MySQL, and other stuff. Some packages also available for armv7h.<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[sergej-repo]<br />
Server = http://repo.p5n.pp.ru/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== tredaelli-systemd ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#tredaelli Timothy Redaelli]<br />
* '''Description:''' systemd rebuilt with the unofficial OpenVZ patch (for kernels < 2.6.32-042stab111.1)<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{Note|{{ic|[tredaelli-systemd]}} must be put before {{ic|[core]}} in {{ic|/etc/pacman.conf}}}}<br />
<br />
{{bc|<nowiki><br />
[tredaelli-systemd]<br />
Server = http://pkgbuild.com/~tredaelli/repo/systemd/$arch<br />
</nowiki>}}<br />
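<br />
Concretely, the relevant part of {{ic|/etc/pacman.conf}} would be ordered like this (the {{ic|[core]}} entry is shown with its usual mirrorlist include):<br />
<br />
{{bc|<nowiki><br />
[tredaelli-systemd]<br />
Server = http://pkgbuild.com/~tredaelli/repo/systemd/$arch<br />
<br />
[core]<br />
Include = /etc/pacman.d/mirrorlist<br />
</nowiki>}}<br />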
<br />
==== herecura ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/people/trusted-users/#idevolder Ike Devolder]<br />
* '''Description:''' additional packages not found in the ''community'' repository<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[herecura]<br />
Server = http://repo.herecura.be/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== blackeagle-pre-community ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/people/trusted-users/#idevolder Ike Devolder]<br />
* '''Description:''' Testing ground for packages I maintain before they move to the ''community'' repository<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[blackeagle-pre-community]<br />
Server = http://repo.herecura.be/$repo/$arch<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
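<br />
For example, the [[#archaudio|archaudio]] entry below would be written as:<br />
<br />
{{bc|<nowiki><br />
[archaudio-production]<br />
SigLevel = PackageOptional<br />
Server = http://repos.archaudio.org/$repo/$arch<br />
</nowiki>}}<br />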
<br />
==== arch-deepin ====<br />
<br />
* '''Maintainer:''' [https://build.opensuse.org/project/show/home:metakcahura metak], [https://github.com/fasheng fasheng]<br />
* '''Description:''' Porting software from Linux Deepin to Archlinux.<br />
* '''Upstream page:''' https://github.com/fasheng/arch-deepin<br />
<br />
{{bc|<nowiki><br />
[home_metakcahura_arch-deepin_Arch_Extra]<br />
SigLevel = Never<br />
Server = http://download.opensuse.org/repositories/home:/metakcahura:/arch-deepin/Arch_Extra/$arch<br />
#Server = http://anorien.csc.warwick.ac.uk/mirrors/download.opensuse.org/repositories/home:/metakcahura:/arch-deepin/Arch_Extra/$arch<br />
</nowiki>}}<br />
<br />
==== archaudio ====<br />
<br />
* '''Maintainer:''' [[User:Schivmeister|Ray Rashif]], [https://aur.archlinux.org/account/jhernberg Joakim Hernberg]<br />
* '''Description:''' Pro-audio packages<br />
<br />
{{bc|<nowiki><br />
[archaudio-production]<br />
Server = http://repos.archaudio.org/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxfr ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:'''<br />
* '''Upstream page:''' http://afur.archlinux.fr<br />
<br />
{{bc|<nowiki><br />
[archlinuxfr]<br />
Server = http://repo.archlinux.fr/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgis ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Maintainers needed - low bandwidth<br />
<br />
{{bc|<nowiki><br />
[archlinuxgis]<br />
Server = http://archlinuxgis.no-ip.org/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgr ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:'''<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr]<br />
Server = http://archlinuxgr.tiven.org/archlinux/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgr-kde4 ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' KDE4 packages (plasmoids, themes etc) provided by the Hellenic (Greek) Arch Linux community<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr-kde4]<br />
Server = http://archlinuxgr.tiven.org/archlinux-kde4/$arch<br />
</nowiki>}}<br />
<br />
==== arsch ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' From users of orgizm.net<br />
<br />
{{bc|<nowiki><br />
[arsch]<br />
Server = http://arsch.orgizm.net/$arch<br />
</nowiki>}}<br />
<br />
==== cinnamon ====<br />
<br />
* '''Maintainer:''' [https://github.com/jnbek jnbek]<br />
* '''Description:''' Stable and actively developed Cinnamon packages (Applets, Themes, Extensions), plus others (Hotot, qBitTorrent, GTK themes, Perl modules, and more).<br />
<br />
{{bc|<nowiki><br />
[cinnamon]<br />
Server = http://archlinux.zoelife4u.org/cinnamon/$arch<br />
</nowiki>}}<br />
<br />
==== ede ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Equinox Desktop Environment repository<br />
<br />
{{bc|<nowiki><br />
[ede]<br />
Server = http://ede.elderlinux.org/repos/archlinux/$arch<br />
</nowiki>}}<br />
<br />
==== heftig ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#heftig Jan Steffens]<br />
* '''Description:''' Includes linux-zen and aurora (Firefox development build - works alongside {{Pkg|firefox}} in the ''extra'' repository).<br />
* '''Upstream page:''' https://bbs.archlinux.org/viewtopic.php?id=117157<br />
<br />
{{bc|<nowiki><br />
[heftig]<br />
Server = http://pkgbuild.com/~heftig/repo/$arch<br />
</nowiki>}}<br />
<br />
==== home_Minerva_W_Science_Arch_Extra ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' [[OpenFOAM]] packages.<br />
<br />
{{bc|<nowiki><br />
[home_Minerva_W_Science_Arch_Extra]<br />
SigLevel = Never<br />
Server = http://download.opensuse.org/repositories/home:/Minerva_W:/Science/Arch_Extra/$arch <br />
</nowiki>}}<br />
<br />
==== mesa-git ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/people/trusted-users/#lcarlier Laurent Carlier]<br />
* '''Description:''' Mesa git builds for the ''testing'' and ''multilib-testing'' repositories<br />
<br />
{{bc|<nowiki><br />
[mesa-git]<br />
Server = http://pkgbuild.com/~lcarlier/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== noware ====<br />
<br />
* '''Maintainer:''' Alexandru Thirtheu (alex_giusi_tiri2@yahoo.com) ([https://bbs.archlinux.org/profile.php?id=65036 Forums]) ([[User:AGT|Wiki]]) ([http://direct.noware.systems.:2 Web Site])<br />
* '''Description:''' Software which I prefer to have in a repository rather than compiling it each time; I find it eases software maintenance. Almost anything goes.<br />
<br />
{{bc|<nowiki><br />
[noware]<br />
Server = http://direct.$repo.systems.:2/repository/arch/$arch<br />
</nowiki>}}<br />
<br />
==== oracle ====<br />
<br />
* '''Maintainer:''' [[User:Malvineous|Malvineous]]<br />
* '''Description:''' Oracle database client<br />
<br />
{{Warning|By adding this you are agreeing to the Oracle license at http://www.oracle.com/technetwork/licenses/instant-client-lic-152016.html}}<br />
<br />
{{bc|<nowiki><br />
[oracle]<br />
Server = http://linux.shikadi.net/arch/$repo/$arch/<br />
</nowiki>}}<br />
<br />
==== pantheon ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Repository containing Pantheon-related packages<br />
<br />
{{bc|<nowiki><br />
[pantheon]<br />
Server = http://pkgbuild.com/~alucryd/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== paulburton-fitbitd ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Contains fitbitd for synchronizing FitBit trackers<br />
<br />
{{bc|<nowiki><br />
[paulburton-fitbitd]<br />
Server = http://www.paulburton.eu/arch/fitbitd/$arch<br />
</nowiki>}}<br />
<br />
==== pietma ====<br />
<br />
* '''Maintainer:''' MartiMcFly <martimcfly@autorisation.de><br />
* '''Description:''' [https://aur.archlinux.org/packages/?K=martimcfly&SeB=m Arch User Repository packages I create or maintain].<br />
* '''Upstream page:''' http://pietma.com/tag/aur/<br />
<br />
{{bc|<nowiki><br />
[pietma]<br />
SigLevel = Optional TrustAll<br />
Server = http://repository.pietma.com/nexus/content/repositories/archlinux/$arch/$repo<br />
</nowiki>}}<br />
<br />
==== pfkernel ====<br />
<br />
* '''Maintainer:''' [[User:Nous|nous]]<br />
* '''Description:''' Generic and optimized binaries of the Arch kernel patched with BFS, TuxOnIce, BFQ and Aufs3; i.e. linux-pf[-cpu] and linux-pf-lts[-cpu]. Also openrc and initscripts-openrc.<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/11734958/x86_64/index.html or start at http://bit.do/linux-pf<br />
<br />
{{bc|<nowiki><br />
[pfkernel]<br />
Server = http://dl.dropbox.com/u/11734958/$arch<br />
</nowiki>}}<br />
<br />
==== suckless ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' suckless.org packages<br />
<br />
{{bc|<nowiki><br />
[suckless]<br />
Server = http://dl.suckless.org/arch/$arch<br />
</nowiki>}}<br />
<br />
==== trinity ====<br />
<br />
* '''Maintainer:''' [[User:Mmanley|Michael Manley]]<br />
* '''Description:''' [[Trinity]] Desktop Environment<br />
<br />
{{bc|<nowiki><br />
[trinity]<br />
Server = http://repo.nasutek.com/arch/contrib/trinity/$arch<br />
</nowiki>}}<br />
<br />
==== Unity-for-Arch ====<br />
<br />
* '''Maintainer:''' https://github.com/chenxiaolong<br />
* '''Description:''' [[Unity]] packages for Arch<br />
<br />
{{bc|<nowiki><br />
[Unity-for-Arch]<br />
SigLevel = Optional TrustAll<br />
Server = http://dl.dropbox.com/u/486665/Repos/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== Unity-for-Arch-Extra ====<br />
<br />
* '''Maintainer:''' https://github.com/chenxiaolong<br />
* '''Description:''' [[Unity]] extra packages for Arch<br />
<br />
{{bc|<nowiki><br />
[Unity-for-Arch-Extra]<br />
SigLevel = Optional TrustAll<br />
Server = http://dl.dropbox.com/u/486665/Repos/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== home_tarakbumba_archlinux_Arch_Extra_standard ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Contains a few pre-built AUR packages (zemberek, etc.)<br />
<br />
{{bc|<nowiki><br />
[home_tarakbumba_archlinux_Arch_Extra_standard]<br />
Server = http://download.opensuse.org/repositories/home:/tarakbumba:/archlinux/Arch_Extra_standard/$arch<br />
</nowiki>}}<br />
<br />
==== QOwnNotes ====<br />
<br />
* '''Maintainer:''' http://www.qownnotes.org<br />
* '''Description:''' QOwnNotes is an open source notepad and todo list manager with Markdown support and [[ownCloud]] integration.<br />
<br />
{{bc|<nowiki><br />
[home_pbek_QOwnNotes_Arch_Extra]<br />
SigLevel = Optional TrustAll<br />
Server = http://download.opensuse.org/repositories/home:/pbek:/QOwnNotes/Arch_Extra/$arch<br />
</nowiki>}}<br />
<br />
== i686 only ==<br />
<br />
=== Signed ===<br />
<br />
==== eee-ck ====<br />
<br />
* '''Maintainer:''' Gruppenpest<br />
* '''Description:''' Kernel and modules optimized for Asus Eee PC 701, with -ck patchset.<br />
* '''Key-ID:''' 27D4A19A<br />
* '''Keyfile:''' http://zembla.duckdns.org/repo/gruppenpest.gpg<br />
<br />
{{bc|<nowiki><br />
[eee-ck]<br />
Server = http://zembla.duckdns.org/repo<br />
</nowiki>}}<br />
<br />
==== phillid ====<br />
<br />
* '''Maintainer:''' Phillid<br />
* '''Description:''' Various GCC builds and matching binutils which target bare-bones formats (for OS dev). The GCC toolchains are shrunk to ~8&nbsp;MiB each by disabling NLS and everything but the C front-end. Also included is some ham-radio-related software I use, such as hamlib, xastir and qsstv, plus a couple of legacy packages which are a bit lengthy for most people to build (kdelibs3, qt3).<br />
* '''Key-ID:''' 28F1E6CE<br />
<br />
{{bc|<nowiki><br />
[phillid]<br />
Server = http://phillid.tk/r/i686/<br />
</nowiki>}}<br />
<br />
==== xyne-i686 ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for the "i686" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{Note|This includes all packages in [[#xyne-any|<nowiki>[xyne-any]</nowiki>]].}}<br />
<br />
{{bc|<nowiki><br />
[xyne-i686]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
==== andrwe ====<br />
<br />
* '''Maintainer:''' Andrwe Lord Weber<br />
* '''Description:''' each program I'm using on x86_64 is compiled for i686 too<br />
* '''Upstream page:''' http://andrwe.org/linux/repository<br />
<br />
{{bc|<nowiki><br />
[andrwe]<br />
Server = http://repo.andrwe.org/i686<br />
</nowiki>}}<br />
<br />
==== kpiche ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Stable OpenSync packages.<br />
<br />
{{bc|<nowiki><br />
[kpiche]<br />
Server = http://kpiche.archlinux.ca/repo<br />
</nowiki>}}<br />
<br />
==== kernel26-pae ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' PAE-enabled 32-bit kernel 2.6.39<br />
<br />
{{bc|<nowiki><br />
[kernel26-pae]<br />
Server = http://kernel26-pae.archlinux.ca/<br />
</nowiki>}}<br />
<br />
==== linux-pae ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' PAE-enabled 32-bit kernel 3.0<br />
<br />
{{bc|<nowiki><br />
[linux-pae]<br />
Server = http://pae.archlinux.ca/<br />
</nowiki>}}<br />
<br />
==== rfad ====<br />
<br />
* '''Maintainer:''' requiem [at] archlinux.us<br />
* '''Description:''' Repository made by haxit<br />
<br />
{{bc|<nowiki><br />
[rfad]<br />
Server = http://web.ncf.ca/ey723/archlinux/repo/<br />
</nowiki>}}<br />
<br />
==== studioidefix ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Precompiled boxee packages.<br />
<br />
{{bc|<nowiki><br />
[studioidefix]<br />
Server = http://studioidefix.googlecode.com/hg/repo/i686<br />
</nowiki>}}<br />
<br />
== x86_64 only ==<br />
<br />
=== Signed ===<br />
<br />
==== apathism ====<br />
<br />
* '''Maintainer:''' Ivan Koryabkin ([https://aur.archlinux.org/account/apathism/ apathism])<br />
* '''Upstream page:''' https://apathism.net/<br />
* '''Description:''' Some AUR packages like {{AUR|psi-plus-git}} (with qt5 enabled).<br />
* '''Key-ID:''' 3E37398D<br />
* '''Keyfile:''' http://apathism.net/archlinux/apathism.key<br />
<br />
{{bc|<nowiki><br />
[apathism]<br />
Server = http://apathism.net/archlinux/<br />
</nowiki>}}<br />
<br />
==== ashleyis ====<br />
<br />
* '''Maintainer:''' Ashley Towns ([https://aur.archlinux.org/account/ashleyis/ ashleyis])<br />
* '''Description:''' Debug versions of SDL, chipmunk, libtmx and other miscellaneous game libraries, plus swift-lang and some other AUR packages<br />
* '''Key-ID:''' B1A4D311<br />
<br />
{{bc|<nowiki><br />
[ashleyis]<br />
Server = http://arch.ashleytowns.id.au/repo/$arch<br />
</nowiki>}}<br />
<br />
==== atom ====<br />
<br />
* '''Maintainer:''' Nicola Squartini ([https://github.com/tensor5 tensor5])<br />
* '''Upstream page:''' https://github.com/tensor5/arch-atom<br />
* '''Description:''' Atom text editor and Electron<br />
* '''Key-ID:''' B0544167<br />
<br />
{{bc|<nowiki><br />
[atom]<br />
Server = http://noaxiom.org/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== bioinformatics ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/decryptedepsilon/ decryptedepsilon]<br />
* '''Description:''' A repository containing some software tools for Bioinformatics<br />
* '''Key-ID:''' 60442BA4<br />
<br />
{{bc|<nowiki><br />
[bioinformatics]<br />
Server = http://decryptedepsilon.bl.ee/repo/x86_64<br />
</nowiki>}}<br />
<br />
==== boyska64 ====<br />
<br />
* '''Maintainer:''' boyska<br />
* '''Description:''' Personal repository: cryptography, sdr, mail handling and misc<br />
* '''Key-ID:''' 0x7395DCAE58289CA9<br />
<br />
{{bc|<nowiki><br />
[boyska64]<br />
Server = http://boyska.degenerazione.xyz/archrepo<br />
</nowiki>}}<br />
<br />
==== coderkun-aur ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/coderkun/ coderkun]<br />
* '''Description:''' AUR packages with assorted software. Supports package deltas as well as package and database signing.<br />
* '''Upstream page:''' https://www.coderkun.de/arch<br />
* '''Key-ID:''' A6BEE374<br />
* '''Keyfile:''' [https://www.coderkun.de/coderkun.asc https://www.coderkun.de/coderkun.asc]<br />
<br />
{{bc|<nowiki><br />
[coderkun-aur]<br />
Server = http://arch.coderkun.de/$repo/$arch/<br />
</nowiki>}}<br />
<br />
==== coderkun-aur-audio ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/coderkun/ coderkun]<br />
* '''Description:''' AUR packages with audio-related software (realtime kernels, lv2-plugins, …). Supports package deltas as well as package and database signing.<br />
* '''Upstream page:''' https://www.coderkun.de/arch<br />
* '''Key-ID:''' A6BEE374<br />
* '''Keyfile:''' [https://www.coderkun.de/coderkun.asc https://www.coderkun.de/coderkun.asc]<br />
<br />
{{bc|<nowiki><br />
[coderkun-aur-audio]<br />
Server = http://arch.coderkun.de/$repo/$arch/<br />
</nowiki>}}<br />
<br />
==== eatabrick ====<br />
<br />
* '''Maintainer:''' bentglasstube<br />
* '''Description:''' Packages for software written by (and a few just compiled by) bentglasstube.<br />
<br />
{{bc|<nowiki><br />
[eatabrick]<br />
SigLevel = Required<br />
Server = http://repo.eatabrick.org/$arch<br />
</nowiki>}}<br />
<br />
==== freifunk-rheinland ====<br />
<br />
* '''Maintainer:''' nomaster<br />
* '''Description:''' Packages for the Freifunk project: batman-adv, batctl, fastd and dependencies.<br />
<br />
{{bc|<nowiki><br />
[freifunk-rheinland]<br />
Server = http://mirror.fluxent.de/archlinux-custom/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== holo ====<br />
<br />
* '''Maintainer:''' Stefan Majewsky <holo-pacman@posteo.de> (please prefer to report issues at [https://github.com/majewsky/holo-pacman-repo/issues Github])<br />
* '''Description:''' Packages for [https://holocm.org Holo configuration management], including compatible plugins and tools.<br />
* '''Upstream page:''' https://github.com/majewsky/holo-pacman-repo<br />
* '''Package list:''' https://repo.holocm.org/archlinux/x86_64<br />
* '''Key-ID:''' 0xF7A9C9DC4631BD1A<br />
<br />
{{bc|<nowiki><br />
[holo]<br />
Server = https://repo.holocm.org/archlinux/x86_64<br />
</nowiki>}}<br />
<br />
==== Linux-pf ====<br />
<br />
{{Accuracy|Signed repositories should not use {{ic|1=SigLevel = Optional}} (by definition).}}<br />
<br />
* '''Maintainer:''' [[User:Thaodan|Thaodan]]<br />
* '''Description:''' Generic and optimized binaries of the Arch kernel patched with BFS, TuxOnIce, BFQ and Aufs3 (i.e. linux-pf, just like {{AUR|linux-pf}} from the [[AUR]]), additionally built for Intel Sandy Bridge, Ivy Bridge and Haswell CPUs as well as a generic target, plus some extra packages<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox).<br />
<br />
{{bc|<nowiki><br />
[Linux-pf]<br />
Server = https://dl.dropboxusercontent.com/u/172590784/Linux-pf/x86_64/<br />
SigLevel = Optional<br />
</nowiki>}}<br />
<br />
==== infinality-bundle-multilib ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle multilib repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle-multilib]<br />
Server = http://bohoomil.com/repo/multilib/$arch<br />
</nowiki>}}<br />
<br />
==== kc9ydn ====<br />
<br />
* '''Maintainer:''' [http://kc9ydn.us KC9YDN]<br />
* '''Description:''' Consists mostly of amateur-radio-related apps<br />
* '''Key-ID:''' 7DA25A0F<br />
<br />
{{bc|<nowiki><br />
[kc9ydn]<br />
Server = http://kc9ydn.us/repo/<br />
</nowiki>}}<br />
<br />
==== linux-lts-ck ====<br />
<br />
* '''Maintainer:''' Claire Farron [https://aur.archlinux.org/account/clfarron4 clfarron4]<br />
* '''Description:''' Current ArchLinux LTS kernel with the CK patch<br />
* '''Key-ID:''' E6366A92<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/298301785/arch/linux-lts-ck/x86_64/index.html or start at http://tiny.cc/linux-lts-ck<br />
<br />
{{bc|<nowiki><br />
[linux-lts-ck]<br />
Server = http://dl.dropbox.com/u/298301785/arch/linux-lts-ck/$arch<br />
</nowiki>}}<br />
<br />
==== linux-lts31x ====<br />
<br />
* '''Maintainer:''' Claire Farron [https://aur.archlinux.org/account/clfarron4 clfarron4]<br />
* '''Description:''' Older LTS kernels (3.10 and 3.12 branch)<br />
* '''Key-ID:''' E6366A92<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/298301785/arch/linux-lts31x/x86_64/index.html or start at http://tiny.cc/linux-lts31x<br />
<br />
{{bc|<nowiki><br />
[linux-lts31x]<br />
Server = http://dl.dropbox.com/u/298301785/arch/linux-lts31x/$arch<br />
</nowiki>}}<br />
<br />
==== linux-lts31x-ck ====<br />
<br />
* '''Maintainer:''' Claire Farron [https://aur.archlinux.org/account/clfarron4 clfarron4]<br />
* '''Description:''' Older LTS kernels (3.10 and 3.12 branch) with the CK patch<br />
* '''Key-ID:''' E6366A92<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/298301785/arch/linux-lts31x-ck/x86_64/index.html or start at http://tiny.cc/linux-lts31x-ck<br />
<br />
{{bc|<nowiki><br />
[linux-lts31x-ck]<br />
Server = http://dl.dropbox.com/u/298301785/arch/linux-lts31x-ck/$arch<br />
</nowiki>}}<br />
<br />
==== linux-ck-pax ====<br />
<br />
* '''Maintainer:''' Claire Farron [https://aur.archlinux.org/account/clfarron4 clfarron4]<br />
* '''Description:''' Current Arch Kernel with the CK and PaX security patchsets<br />
* '''Key-ID:''' E6366A92<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/298301785/arch/linux-ck-pax/x86_64/index.html or start at http://tiny.cc/linux-ck-pax<br />
<br />
{{bc|<nowiki><br />
[linux-ck-pax]<br />
Server = http://dl.dropbox.com/u/298301785/arch/linux-ck-pax/$arch<br />
</nowiki>}}<br />
<br />
==== linux-kalterfx ====<br />
<br />
* '''Maintainer''': Anna Ivanova ([https://aur.archlinux.org/account/kalterfive kalterfive])<br />
* '''Upstream page''': https://kalterfive.github.io/linux-kalterfx/about.html<br />
* '''Description''': A custom kernel with applied pf patchset and compiled fs/reiser4.<br />
* '''Key-ID''': A0C04F15<br />
* '''Keyfile''': https://keybase.io/kalterfive/key.asc<br />
<br />
{{bc|<nowiki><br />
[linux-kalterfx]<br />
Server = http://deadsoftware.ru/files/linux-kalterfx/repo/$arch<br />
</nowiki>}}<br />
<br />
==== linux-tresor ====<br />
<br />
* '''Maintainer:''' Claire Farron [https://aur.archlinux.org/account/clfarron4 clfarron4]<br />
* '''Description:''' Arch Current and LTS kernels with TRESOR<br />
* '''Key-ID:''' E6366A92<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/298301785/arch/linux-tresor/x86_64/index.html or start at http://tiny.cc/linux-tresor<br />
<br />
{{bc|<nowiki><br />
[linux-tresor]<br />
Server = http://dl.dropbox.com/u/298301785/arch/linux-tresor/$arch<br />
</nowiki>}}<br />
<br />
==== nullptr_t ====<br />
<br />
* '''Maintainers:''' nullptr_t<br />
* '''Description:''' Cherry-picked non-proprietary packages and admin tools from the AUR (e.g. [[plymouth]], nemo-extensions and a few more)<br />
* '''Comment:''' Down until packaging is explicitly allowed by all GPL licenses.<br />
* '''Key-ID:''' B4767A17CEC5B4E9<br />
<br />
{{bc|<nowiki><br />
[nullptr_t]<br />
Server = https://archlinux.0ptr.de/mirrors/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== markzz ====<br />
<br />
* '''Maintainer:''' [[User:Markzz|Mark Weiman (markzz)]]<br />
* '''Description:''' Packages that markzz maintains or uses on the AUR; this includes Linux with the vfio patchset ({{AUR|linux-vfio}} and {{AUR|linux-vfio-lts}}), and packages to maintain a Debian package repository.<br />
* '''Sources:''' http://git.markzz.net/markzz/repositories/markzz.git/tree<br />
* '''Key ID:''' 3CADDFDD<br />
<br />
{{Note|If you want to add the key by installing the ''markzz-keyring'' package, temporarily add {{ic|1=SigLevel = Never}} into the repository section.}}<br />
<br />
{{bc|<nowiki><br />
[markzz]<br />
Server = http://repo.markzz.com/arch/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== qt-debug ====<br />
<br />
* '''Maintainer:''' [http://blog.the-compiler.org/?page_id=36 The Compiler]<br />
* '''Description:''' Qt/PyQt builds with debug symbols<br />
* '''Upstream page:''' https://github.com/The-Compiler/qt-debug-pkgbuild<br />
* '''Key-ID:''' D6A1C70FE80A0C82<br />
<br />
{{bc|<nowiki><br />
[qt-debug]<br />
Server = http://qutebrowser.org/qt-debug/$arch<br />
</nowiki>}}<br />
<br />
==== quarry ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/developers/#anatolik anatolik]<br />
* '''Description:''' Arch binary repository for [http://rubygems.org/ Rubygems] packages. See [https://bbs.archlinux.org/viewtopic.php?id=182729 forum announcement] for more information.<br />
* '''Sources:''' https://github.com/anatol/quarry<br />
* '''Key-ID:''' Not needed, as maintainer is a developer<br />
<br />
{{bc|<nowiki><br />
[quarry]<br />
Server = http://pkgbuild.com/~anatolik/quarry/x86_64/<br />
</nowiki>}}<br />
<br />
==== rstudio ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/unikum/ Artem Klevtsov]<br />
* '''Description:''' RStudio IDE package (git version) and its dependencies.<br />
* '''Key-ID:''' 1CB48DD4<br />
<br />
{{bc|<nowiki><br />
[rstudio]<br />
Server = http://repo.psylab.info/archlinux/x86_64/<br />
</nowiki>}}<br />
<br />
==== siosm-aur ====<br />
<br />
* '''Maintainer:''' [https://tim.siosm.fr/about/ Timothee Ravier]<br />
* '''Description:''' packages also available in the Arch User Repository, sometimes with minor fixes<br />
* '''Upstream page:''' https://tim.siosm.fr/repositories/<br />
* '''Key-ID:''' 78688F83<br />
<br />
{{bc|<nowiki><br />
[siosm-aur]<br />
Server = http://siosm.fr/repo/$repo/<br />
</nowiki>}}<br />
<br />
==== subtitlecomposer ====<br />
<br />
* '''Maintainer:''' Mladen Milinkovic (maxrd2)<br />
* '''Description:''' Subtitle Composer stable and nightly builds<br />
* '''Upstream page:''' https://github.com/maxrd2/subtitlecomposer<br />
* '''Key-ID:''' EF9D9B26<br />
<br />
{{bc|<nowiki><br />
[subtitlecomposer]<br />
Server = http://smoothware.net/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== xyne-x86_64 ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for the "x86_64" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{Note|This includes all packages in [[#xyne-any|<nowiki>[xyne-any]</nowiki>]].}}<br />
<br />
{{bc|<nowiki><br />
[xyne-x86_64]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
<br />
==== alucryd ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Various packages Maxime Gauduin maintains (or not) in the AUR.<br />
<br />
{{bc|<nowiki><br />
[alucryd]<br />
Server = http://pkgbuild.com/~alucryd/$repo/x86_64<br />
</nowiki>}}<br />
<br />
==== alucryd-multilib ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Various packages needed to run Steam without its runtime environment.<br />
<br />
{{bc|<nowiki><br />
[alucryd-multilib]<br />
Server = http://pkgbuild.com/~alucryd/$repo/x86_64<br />
</nowiki>}}<br />
<br />
==== andrwe ====<br />
<br />
* '''Maintainer:''' Andrwe Lord Weber<br />
* '''Description:''' contains programs I'm using on many systems<br />
* '''Upstream page:''' http://andrwe.dyndns.org/doku.php/blog/repository {{Dead link|2013|11|30}}<br />
<br />
{{bc|<nowiki><br />
[andrwe]<br />
Server = http://repo.andrwe.org/x86_64<br />
</nowiki>}}<br />
<br />
==== archstudio ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Audio and Music Packages optimized for Intel Core i3, i5, and i7.<br />
* '''Upstream page:''' http://www.xsounds.org/~archstudio<br />
<br />
{{bc|<nowiki><br />
[archstudio]<br />
Server = http://www.xsounds.org/~archstudio/x86_64<br />
</nowiki>}}<br />
<br />
==== brtln ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#bpiotrowski Bartłomiej Piotrowski]<br />
* '''Description:''' Some VCS packages.<br />
<br />
{{bc|<nowiki><br />
[brtln]<br />
Server = http://pkgbuild.com/~barthalion/brtln/$arch/<br />
</nowiki>}}<br />
<br />
==== jkanetwork ====<br />
<br />
* '''Maintainer:''' kprkpr <kevin01010 at gmail dot com><br />
* '''Maintainer:''' Joselucross <jlgarrido97 at gmail dot com><br />
* '''Description:''' AUR packages such as pimagizer, stepmania, yaourt, linux-mainline, wps-office, grub-customizer and some IDEs. Open to everyone who wants to contribute.<br />
* '''Upstream page:''' http://repo.jkanetwork.com/<br />
<br />
{{bc|<nowiki><br />
[jkanetwork]<br />
Server = http://repo.jkanetwork.com/repo/$repo/<br />
</nowiki>}}<br />
<br />
==== mazdlc ====<br />
<br />
* '''Maintainer:''' maz-1 <ohmygod19993 at gmail dot com><br />
* '''Description:''' Various packages maintained by maz-1 (mainly Qt5-based and multimedia-related packages)<br />
* '''Upstream page:''' https://build.opensuse.org/project/show/home:mazdlc<br />
<br />
{{bc|<nowiki><br />
[home_mazdlc_Arch_Extra]<br />
Server = http://download.opensuse.org/repositories/home:/mazdlc/Arch_Extra/$arch<br />
</nowiki>}}<br />
<br />
==== mazdlc-deadbeef-plugins ====<br />
<br />
* '''Maintainer:''' maz-1 <ohmygod19993 at gmail dot com><br />
* '''Description:''' Plugins for the feature-rich music player DeaDBeeF.<br />
* '''Upstream page:''' https://build.opensuse.org/project/show/home:mazdlc<br />
<br />
{{bc|<nowiki><br />
[home_mazdlc_deadbeef-plugins_Arch_Extra]<br />
Server = http://download.opensuse.org/repositories/home:/mazdlc:/deadbeef-plugins/Arch_Extra/$arch<br />
</nowiki>}}<br />
<br />
==== mazdlc-kde-frameworks-5 ====<br />
<br />
* '''Maintainer:''' maz-1 <ohmygod19993 at gmail dot com><br />
* '''Description:''' Unstable packages based on kde frameworks 5.<br />
* '''Upstream page:''' https://build.opensuse.org/project/show/home:mazdlc<br />
<br />
{{bc|<nowiki><br />
[home_mazdlc_kde-frameworks-5_Arch_Extra]<br />
Server = http://download.opensuse.org/repositories/home:/mazdlc:/kde-frameworks-5/Arch_Extra/$arch<br />
</nowiki>}}<br />
<br />
==== mikroskeem ====<br />
<br />
* '''Maintainer:''' mikroskeem <mikroskeem@mikroskeem.eu><br />
* '''Description:''' OpenArena, i3 wm, and Neovim-related packages (run ''pacman -Sl mikroskeem'' for the full list)<br />
<br />
{{bc|<nowiki><br />
[mikroskeem]<br />
Server = https://nightsnack.cf/~mark/arch-pkgs<br />
</nowiki>}}<br />
<br />
==== mingw-w64 ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/ant32 Philip] and [https://aur.archlinux.org/account/nic96 Jeromy] Reimer<br />
* '''Description:''' Almost all mingw-w64 packages in the AUR.<br />
<br />
{{bc|<nowiki><br />
[mingw-w64]<br />
Server = http://downloads.sourceforge.net/project/mingw-w64-archlinux/$arch<br />
#Server = http://amr.linuxd.org/archlinux/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== pnsft-pur ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Japanese input method packages Mozc (vanilla) and libkkc<br />
<br />
{{bc|<nowiki><br />
[pnsft-pur]<br />
Server = http://downloads.sourceforge.net/project/pnsft-aur/pur/x86_64<br />
</nowiki>}}<br />
<br />
==== rakudo ====<br />
<br />
* '''Maintainer:''' spider-mario <spidermario@free.fr><br />
* '''Description:''' Rakudo Perl6<br />
<br />
{{bc|<nowiki><br />
[rakudo]<br />
Server = https://spider-mario.quantic-telecom.net/archlinux/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== rightlink ====<br />
<br />
* '''Maintainer:''' Chris Fordham <chris@fordham-nagy.id.au><br />
* '''Description:''' RightLink version 10 (RL10) is a new version of RightScale's server agent that connects servers managed through RightScale to the RightScale cloud management platform.<br />
<br />
{{bc|<nowiki><br />
[rightlink]<br />
Server = https://s3-ap-southeast-2.amazonaws.com/archlinux.rightscale.me/repo<br />
</nowiki>}}<br />
<br />
==== seiichiro ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' VDR and some plugins, mms, foo2zjs-drivers<br />
<br />
{{bc|<nowiki><br />
[seiichiro]<br />
Server = http://repo.seiichiro0185.org/x86_64<br />
</nowiki>}}<br />
<br />
==== studioidefix ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Precompiled boxee packages.<br />
<br />
{{bc|<nowiki><br />
[studioidefix]<br />
Server = http://studioidefix.googlecode.com/hg/repo/x86_64<br />
</nowiki>}}<br />
<br />
==== zrootfs ====<br />
<br />
* '''Maintainer:''' Isabell Cowan <isabellcowan@gmail.com><br />
* '''Description:''' Packages built for Haswell and Broadwell architecture processors with size in mind (out of date 2016-03-14).<br />
<br />
{{bc|<nowiki><br />
[zrootfs]<br />
Server = http://www.izzette.com/izzi/zrootfs-old<br />
</nowiki>}}<br />
<br />
== armv6h only ==<br />
<br />
=== Unsigned ===<br />
<br />
==== arch-fook-armv6h ====<br />
<br />
* '''Maintainer:''' Jaska Kivelä <jaska@kivela.net><br />
* '''Description:''' Stuff that I have compiled for my Raspberry Pi, including Enlightenment and home automation packages.<br />
<br />
{{bc|<nowiki><br />
[arch-fook-armv6h]<br />
Server = http://kivela.net/jaska/arch-fook-armv6h<br />
</nowiki>}}<br />
<br />
== armv7h only ==<br />
<br />
=== Unsigned ===<br />
<br />
==== pietma ====<br />
<br />
* '''Maintainer:''' MartiMcFly <martimcfly@autorisation.de><br />
* '''Description:''' Arch User Repository packages [https://aur.archlinux.org/packages/?K=martimcfly&SeB=m I create or maintain].<br />
* '''Upstream page:''' [http://pietma.com/tag/aur/ http://pietma.com/tag/aur/]<br />
<br />
{{bc|<nowiki><br />
[pietma]<br />
SigLevel = Optional TrustAll<br />
Server = http://repository.pietma.com/nexus/content/repositories/archlinux/$arch/$repo<br />
</nowiki>}}</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS&diff=320481ZFS2014-06-16T23:51:12Z<p>Demizer: /* Create a storage pool */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"], ZFS is stable, fast, secure, and future-proof. Because it is licensed under the GPL-incompatible CDDL, ZFS cannot be distributed along with the Linux kernel. This restriction, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as a dependency, which in turn has {{AUR|spl-utils-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages, with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live up to its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon, execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
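<br />
For example, assuming a pool named {{ic|bigdata}} (as created later in this article), the command would be:<br />
 # zpool set cachefile=/etc/zfs/zpool.cache bigdata<br />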
<br />
===Systemd===<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all of the devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information. See [[Mdadm#Prepare the Devices]].}}<br />
<br />
{{Warning|For Advanced Format disks with a 4KB sector size, an ashift of 12 is recommended for best performance. Advanced Format disks emulate a sector size of 512 bytes for compatibility with legacy systems; this causes ZFS to sometimes use an ashift option number that is not ideal. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?], [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives 1.15 How does ZFS on Linux handle Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the IDs of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool ZFS on Linux developers recommend] using device IDs when creating ZFS storage pools of less than 10 devices. To find the IDs:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. RAID-Z is a special implementation of RAID-5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about RAID-Z.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool. Get them from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
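<br />
Other virtual device types are created the same way; for instance, a simple mirrored pool could be created as follows (a sketch only; the ids are placeholders for real entries from {{ic|/dev/disk/by-id}}):<br />
<br />
 # zpool create -f -m /mnt/data bigdata mirror <id-of-first-disk> <id-of-second-disk><br />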
<br />
=== Advanced format disks ===<br />
<br />
In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because of the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (see the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Verifying pool creation ===<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems; you can view a full list with <code>zfs get all <pool></code>. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available for ZFSonLinux HEAD snapshots (which is normally installed on non-LTS kernels), to be released as a new feature of 0.6.3. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time hasn't been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that: transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm, which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed again using the zfs command:<br />
 # zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help reduce fragmentation and speed up file access, at the cost that ZFS has to allocate new 128KiB blocks each time only a few bytes are written.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this is that ZFS will be committing data '''twice''' to the data disks, and it can severely impact performance. You can tell ZFS not to use the ZIL, in which case data is only committed to the file system once. Disabling the ZIL for non-database file systems or for pools with configured log devices (e.g. with SSDs) can actually negatively impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
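<br />
To double-check that the properties took effect (a sketch, assuming the dataset name used above):<br />
 # zfs get recordsize,primarycache,logbias <pool>/postgres<br />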
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you plan on using ZFS to store your /tmp directory (which may be useful for storing arbitrarily-large sets of files, or simply keeping your RAM free of idle data), you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (eg, with <code>fsync</code> or <code>O_SYNC</code>) and return immediately. While this has severe data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected:<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents any privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
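<br />
The resulting properties can be verified afterwards with:<br />
 # zfs get sync,setuid,devices <pool>/tmp<br />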
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== zvols ===<br />
zvols might suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default block size ({{ic|volblocksize}}) for zvols is 8KiB already. If possible, it is best to align any partitions contained in a zvol to your block size (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the block size to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
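<br />
As a minimal sketch, a zvol with an explicit 8KiB block size could be created like this (the name and size are only examples):<br />
 # zfs create -V 10G -b 8K <pool>/<zvolname><br />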
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
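<br />
Existing datasets and a selection of their properties can be reviewed at any time, for example:<br />
 # zfs list -o name,quota,used,mountpoint<br />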
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid is different in the archiso than it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt made to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can be produced at boot time, abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair by either exporting the pool, or adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool:<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
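<br />
For example, to move the {{ic|bigdata}} pool used earlier in this article (the path is only an example):<br />
 # zfs set mountpoint=/srv/data bigdata<br />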
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow the use of swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the <tt>getconf PAGESIZE</tt> command (the default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create an 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
 # mkswap -f /dev/zvol/<pool>/swap<br />
 # swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
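<br />
After rebooting, confirm that the zvol is in use as swap:<br />
 $ cat /proc/swaps<br />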
<br />
Keep in mind that the hibernation hook must be loaded before filesystems, so using a ZVOL as swap does not allow the use of the hibernate function. If you need hibernation, keep a separate partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, more fine-grained control is possible per label; for example, if no monthlies are to be kept on a snapshot, set <tt>com.sun:auto-snapshot:monthly=false</tt>.<br />
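<br />
For example (the dataset names are placeholders):<br />
 # zfs set com.sun:auto-snapshot=false <pool>/scratch<br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/data<br />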
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
 zfs.zfs_arc_max=536870912 # (for 512MiB)<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
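<br />
The same limit can alternatively be set as a kernel module option instead of a kernel parameter (a sketch; if the zfs module is loaded from the initramfs, regenerate the image afterwards, e.g. with {{ic|mkinitcpio -p linux}}):<br />
<br />
{{hc|/etc/modprobe.d/zfs.conf|<nowiki><br />
options zfs zfs_arc_max=536870912</nowiki>}}<br />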
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the <code>zpool create</code> command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. The first is to place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool:<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use:<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:<br />
<br />
# mkinitcpio -p linux<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid that marks ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again; if the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. The following is just an example:<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initramfs image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb &#124; grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required.<br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, install the {{ic|archiso}} package:<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file, adding these lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file, adding these lines (TODO, correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More information about the process can be read in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text<br />
abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their names are fixed, so you just need to change the {{ic|zpool create}} commands to<br />
point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there.<br />
Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted; otherwise the protection<br />
might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
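<br />
For a pool that does not hold the root filesystem, the plain dm-crypt mapping can instead be opened automatically at boot via {{ic|/etc/crypttab}}. The following is only a sketch mirroring the cryptsetup options above; the device paths are placeholders:<br />
<br />
{{hc|/etc/crypttab|<nowiki><br />
enc  /dev/sdX  /dev/sdZ  plain,cipher=twofish-xts-plain64,hash=sha512,size=512</nowiki>}}<br />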
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
<br />
Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even from the same input you get different output (thanks to salting), making deduplication impossible.<br />
To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS&diff=320464ZFS2014-06-16T19:53:49Z<p>Demizer: Split Create a storage pool into subsections.</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"], ZFS is stable, fast, secure, and future-proof. Because it is licensed under the GPL-incompatible CDDL, ZFS cannot be distributed along with the Linux kernel. This restriction, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as a dependency, which in turn has {{AUR|spl-utils-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages, with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live up to its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon, execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
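<br />
For example, assuming a pool named {{ic|bigdata}} (as created later in this article), the command would be:<br />
 # zpool set cachefile=/etc/zfs/zpool.cache bigdata<br />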
<br />
===Systemd===<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all of the devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information. See [[Mdadm#Prepare the Devices]].}}<br />
<br />
{{Warning|For Advanced Format disks with a 4KB sector size, an ashift of 12 is recommended for best performance. To maintain compatibility with legacy systems, Advanced Format disks report an emulated sector size of 512 bytes to ZFS at pool creation, which can cause the pool to be created with an ashift not equal to 12. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?], [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives 1.15 How does ZFS on Linux handle Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the IDs of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool ZFS on Linux developers recommend] using device IDs when creating ZFS storage pools of less than 10 devices. To find the IDs:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. RAID-Z is a special implementation of RAID-5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about RAID-Z.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool. Get them from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
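<br />
Other virtual device types are created the same way; for instance, a simple mirrored pool could be created as follows (a sketch only; the ids are placeholders for real entries from {{ic|/dev/disk/by-id}}):<br />
<br />
 # zpool create -f -m /mnt/data bigdata mirror <id-of-first-disk> <id-of-second-disk><br />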
<br />
=== Advanced format disks ===<br />
<br />
In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because of the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (see the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Verifying pool creation ===<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems; you can view a full list with <code>zfs get all <pool></code>. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available for ZFSonLinux HEAD snapshots (which is normally installed on non-LTS kernels), to be released as a new feature of 0.6.3. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time hasn't been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that: transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm, which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed again using the zfs command:<br />
 # zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help reduce fragmentation and speed up file access, at the cost that ZFS has to allocate new 128KiB blocks each time only a few bytes are written.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this is that ZFS will be committing data '''twice''' to the data disks, and it can severely impact performance. You can tell ZFS not to use the ZIL, in which case data is only committed to the file system once. Disabling the ZIL for non-database file systems or for pools with configured log devices (e.g. with SSDs) can actually negatively impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
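<br />
As a quick sanity check, the resulting properties can be read back; the {{ic|<pool>/postgres}} name follows the example above:<br />
 # zfs get recordsize,primarycache,logbias <pool>/postgres<br />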
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you plan on using ZFS to store your /tmp directory (which may be useful for storing arbitrarily-large sets of files, or simply keeping your RAM free of idle data), you can generally improve the performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (e.g. with <code>fsync</code> or <code>O_SYNC</code>) and return immediately. While this has severe data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important, so the risk is usually acceptable:<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents any privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
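<br />
After masking the unit and rebooting, the setup can be verified like so (illustrative commands; {{ic|<pool>/tmp}} as in the examples above):<br />
 $ systemctl is-enabled tmp.mount<br />
 masked<br />
 # zfs get sync,setuid,devices <pool>/tmp<br />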
<br />
=== zvols ===<br />
zvols might suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default block size for zvols (the '''volblocksize''' property) is already 8KiB. If possible, it is best to align any partitions contained in a zvol to that block size (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and to use the same file system block size. Other than this, you might tweak the block size to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
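<br />
For example, a zvol with an explicit block size could be created as follows (a sketch with a hypothetical volume name; note that '''volblocksize''' can only be set at creation time):<br />
 # zfs create -V 10G -o volblocksize=8K <pool>/myvol<br />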
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
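<br />
The effect can be checked with {{ic|zfs list}}, which accepts arbitrary properties as columns (an illustrative check):<br />
 # zfs list -o name,used,avail,quota<br />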
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
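<br />
A scrub runs in the background; its progress, and the date of the last completed scrub, can be checked at any time with:<br />
 # zpool status <pool><br />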
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including any read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid of the archiso differs from that of the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can occur at boot time, abruptly dropping the system into the busybox console and requiring an archiso for an emergency repair, either by exporting the pool or by adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
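<br />
For example, to rename the pool {{ic|bigdata}} from the examples above to a hypothetical {{ic|newdata}}:<br />
 # zpool export bigdata<br />
 # zpool import bigdata newdata<br />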
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not support swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the <tt>getconf PAGESIZE</tt> command (the default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create an 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as a swap partition:<br />
<br />
 # mkswap -f /dev/zvol/<pool>/swap<br />
 # swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
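<br />
After enabling the swap, it should appear in the active swap list (a quick check):<br />
 # swapon -s<br />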
<br />
Keep in mind that the hibernation resume hook must run before file systems are mounted, so using a ZVOL as swap does not allow the use of the hibernate function. If you need hibernate, keep a dedicated partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Finer-grained control is also available per label: for example, if no monthly snapshots are to be kept for a dataset, set <tt>com.sun:auto-snapshot:monthly=false</tt> on it.<br />
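<br />
As an illustration, the two properties could be set like this (hypothetical dataset names):<br />
 # zfs set com.sun:auto-snapshot=false <pool>/scratch<br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/home<br />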
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a Python service that takes daily snapshots of a configurable set of ZFS datasets and cleans them up in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to, for example, keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots.<br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
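<br />
The limit can also be adjusted at runtime through the module parameter exposed in sysfs, without rebooting (a sketch; a lowered limit only takes full effect as the cache turns over):<br />
 # echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max<br />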
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zpool:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zpool create command.<br />
<br />
=== No hostid found ===<br />
<br />
This error occurs at boot, with the following line appearing before the initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code> and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
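<br />
A minimal sketch of producing the hostid file by hand, assuming the example value 0x00bab10c reported by {{ic|hostid}} (the four bytes are written in the machine's native little-endian order on x86):<br />
 # hostid<br />
 00bab10c<br />
 # printf '\x0c\xb1\xba\x00' > /etc/hostid<br />
 # mkinitcpio -p linux<br />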
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use,<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation from the archiso, the network configuration could be different, generating a hostid different from the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:<br />
<br />
# mkinitcpio -p linux<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. The value below is just an example:<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initramfs image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but this is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb &#124; grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required.<br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, install the {{ic|archiso}} package:<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file, adding these lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file, adding these lines (TODO: correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More information about the process can be found in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on Linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted; otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
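<br />
Since every member of a pool must be encrypted for the protection to hold, a mirrored pool over two mapped devices would look like this (a sketch with hypothetical mapper names):<br />
 # zpool create zroot mirror /dev/mapper/enc0 /dev/mapper/enc1<br />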
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed no import errors will occur.<br />
<br />
Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, which makes compression ineffective, and even the same input produces different output (thanks to salting), which makes deduplication impossible.<br />
To reduce the unnecessary overhead, it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as a dependency, which in turn has {{AUR|spl-utils-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needed sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straight forward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is not necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all device have been used in a software RAID set it is paramount to erase any old RAID configuration information. (https://wiki.archlinux.org/index.php/Mdadm#Prepare_the_Devices) }}<br />
<br />
{{Warning|For Advanced Format Disks with 4KB sector size, an ashift of 12 is recommended for best performance. To maintain compatibility with legacy systems Advanced Format disks emulate a sector size of 512 bytes when reported to ZFS at pool creation causing the pool to be created with an ashift not equal to 12. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?], [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives 1.15 How does ZFS on Linux handles Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the id's of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the id's, simply:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, than the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions that to include into the pool. Get it from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (See the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems, you can view a full list with <code>zfs get all <pool></code>. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available for ZFSonLinux HEAD snapshots (which is normally installed on non-LTS kernels), to be released as a new feature of 0.6.3. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time hasn't been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# sudo zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of file being written. This can often help fragmentation and file access, at the cost that ZFS would have to allocate new 128KiB blocks each time only a few bytes are written to.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this, is that ZFS will be committing data '''twice''' to the data disks and it can severely impact performance. You can tell ZFS to not use the ZIL, and in which case data is only committed to the file system once. Disabling the ZIL for non-database file systems or for pools with configured log devices (eg, with SSDs) can actually negatively impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you plan on using ZFS to store your /tmp directory (which may be useful for storing arbitrarily-large sets of files, or simply keeping your RAM free of idle data), you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (eg, with <code>fsync</code> or <code>O_SYNC</code>) and return immediately. While this has severe data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected:<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents any privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== zvols ===<br />
zvols might suffer from the same block size-related issues as RDBMSes, but it's worth noting that the default recordsize for zvols is 8KiB already. If possible, it is best to align any partitions contained in a zvol to your recordsize (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''recordsize''' to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use :<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including and read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso as the hostid is different in the archiso as it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempts made to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can be produced at boot time abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair by either exporting the pool, or adding the {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]]<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in 2 steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow to use swapfiles, but users can use a ZFS volume (ZVOL) as swap. It is importart to set the ZVOL block size to match the system page size, which can be obtained by the <tt>getconf PAGESIZE</tt> command (default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create a 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/zroot/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
Keep in mind the Hibernate hook must be loaded before filesystems, so using ZVOL as swap will not allow to use hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, set more fine-grained control as well by label, if, for example, no monthlies are to be kept on a snapshot, for example, set <tt>com.sun:auto-snapshot:monthly=false</tt>.<br />
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem,<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zfs create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hosted. There are two solutions, for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader. For example, adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image. Which will copy the hostid into the initramfs image.<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use,<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso the network configuration could be different generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export pool as described above, and then rebuild ramdisk in normally booted system:<br />
<br />
# mkinitcpio -p linux<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership. So during the first boot the zpool should mount correctly. If it does not there is some other problem.<br />
<br />
Reboot again, if the zfs pool refuses to mount it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number, once the hostid is coherent across the reboots the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. This one is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
This number have to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initram image, see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check adding {{ic|zfs_force&#61;1}} in the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted it should be replaced A.S.A.P. with an identical drive.<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb | grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea make an installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required. <br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, download the {{ic|archiso}} package.<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file adding those lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file adding those lines (TODO, correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More informations about the process can be read in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text<br />
abstraction, it is possible to have the data encrypted while keeping all the advantages of ZFS, like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their names are fixed, so the {{ic|zpool create}} commands only need to<br />
point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there.<br />
Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection<br />
might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line must enable the keyboard for the password prompt, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
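For a non-root pool, the mapping can instead be created at boot with a {{ic|/etc/crypttab}} entry mirroring the cryptsetup invocation above (a hypothetical sketch; see crypttab(5) for the exact option names supported by your systemd version):<br />
<br />
{{hc|/etc/crypttab|<nowiki><br />
enc /dev/sdX /dev/sdZ plain,cipher=twofish-xts-plain64,hash=sha512,size=512</nowiki>}}<br />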
<br />
Creating encrypted zpools works fine, but if you need encrypted directories, for example to protect your users' home directories, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, which makes compression ineffective, and even identical input produces different output (due to salting), which makes deduplication impossible.<br />
To reduce this unnecessary overhead, it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, for encryption and for login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in as the user and complete the procedure with ecryptfs-unwrap-passphrase><br />
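Once the migration has finished, the encrypted mount can be verified from the user's session (a quick check, not part of the original procedure):<br />
<br />
 $ mount | grep ecryptfs<br />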
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network, using either wireless or wired configuration:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add the archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
<br />
# pacman-key --lsign-key 0EE7A126<br />
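If pacman-key cannot find the key locally, it may need to be fetched from a keyserver first (a hedged extra step, not part of the original instructions; re-run the signing command afterwards):<br />
<br />
 # pacman-key -r 0EE7A126<br />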
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso, while pacman shows the version installed in the chroot. If the two versions differ, run depmod (in the chroot) with the kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This regenerates the module dependency information for the kernel version installed in the chroot, so the correct modules can be loaded.<br />
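To avoid copying the version by hand, a one-liner sketch (it assumes the stock {{ic|linux}} package, whose kernel release is the package version with an {{ic|-ARCH}} suffix):<br />
<br />
 # depmod -a "$(pacman -Q linux | cut -d' ' -f2)-ARCH"<br />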
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog series on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Demizer
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as a dependency, which in turn has {{AUR|spl-utils-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needed sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straight forward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is not necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all device have been used in a software RAID set it is paramount to erase any old RAID configuration information. (https://wiki.archlinux.org/index.php/Mdadm#Prepare_the_Devices) }}<br />
<br />
{{Warning|Some disk firmwares misrepresent the physical sector size in reporting to ZFS. For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the id's of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the id's, simply:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, than the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions that to include into the pool. Get it from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (See the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems; you can view a full list with <code>zfs get all <pool></code>. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available for ZFSonLinux HEAD snapshots (which is normally what is installed on non-LTS kernels), to be released as a new feature of 0.6.3. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for the various options. A recent advancement is the lz4 algorithm, which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other zfs options can be displayed again, using the zfs command:<br />
 # zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help reduce fragmentation and speed up file access, at the cost that ZFS has to allocate a new 128KiB block each time only a few bytes of a file are written.<br />
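<br />
The current value can be checked through the standard property interface, for example:<br />
<br />
 # zfs get recordsize <pool><br />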
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases often sync their data files to the file system on their own transaction commits anyway. The end result is that ZFS commits the data '''twice''' to the data disks, which can severely impact performance. You can tell ZFS not to use the ZIL, in which case data is committed to the file system only once. Disabling the ZIL for non-database file systems or for pools with configured log devices (e.g. with SSDs) can actually hurt performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
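<br />
If one of these properties has been set on a general-purpose dataset by mistake, it can be reverted to the value inherited from its parent (a sketch; {{ic|<pool>/home}} is just an example dataset):<br />
<br />
 # zfs inherit recordsize <pool>/home<br />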
<br />
=== /tmp ===<br />
If you plan on using ZFS to store your /tmp directory (which may be useful for storing arbitrarily large sets of files, or simply keeping your RAM free of idle data), you can generally improve the performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (e.g. with <code>fsync</code> or <code>O_SYNC</code>) and return immediately. While this has severe data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected:<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents any privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== zvols ===<br />
zvols might suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default block size for zvols (the {{ic|volblocksize}} property, which can only be set at creation time) is 8KiB already. If possible, it is best to align any partitions contained in a zvol to this block size (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and to set file system block sizes to the same size. Other than this, you might tweak the '''volblocksize''' to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
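<br />
A minimal sketch of a zvol with an aligned file system on top ({{ic|<pool>/vol0}} and the 10G size are placeholders; mkfs.ext4 uses 4KiB blocks here, which divide evenly into the default 8KiB volblocksize):<br />
<br />
 # zfs create -V 10G <pool>/vol0<br />
 # mkfs.ext4 -b 4096 /dev/zvol/<pool>/vol0<br />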
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including any read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid of the archiso differs from that of the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can occur at boot time, abruptly dropping the system into the busybox console and requiring an archiso for an emergency repair, either by exporting the pool or by adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
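<br />
On the other system, the pool can then be imported again by name:<br />
<br />
 # zpool import bigdata<br />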
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that has already been created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not support swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the <tt>getconf PAGESIZE</tt> command (the default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create an 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
 # mkswap -f /dev/zvol/<pool>/swap<br />
 # swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
Keep in mind that the hibernate hook must be loaded before the filesystems hook, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernation, keep a separate partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
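<br />
For example, the installed cron tasks invoke the script along these lines (a sketch; check the cron files shipped with the package for the exact flags used by your version):<br />
<br />
 # zfs-auto-snapshot --quiet --syslog --label=daily --keep=31 //<br />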
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, more fine-grained control is possible per label; if, for example, no monthly snapshots are to be kept on a dataset, set <tt>com.sun:auto-snapshot:monthly=false</tt>.<br />
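<br />
For example ({{ic|<pool>/<dataset>}} being the dataset to exclude):<br />
<br />
 # zfs set com.sun:auto-snapshot=false <pool>/<dataset><br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/<dataset><br />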
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a Python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "grandfather-father-son"] scheme. It can be configured to, for example, keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots.<br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
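<br />
The limit can also be changed on a running system through the module parameter in sysfs (a sketch; the value is in bytes and the zfs module must already be loaded):<br />
<br />
 # echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max<br />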
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem,<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zpool create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the SPL hostid. There are two solutions for this. Either place the SPL hostid in the [[kernel parameters]] in the boot loader, for example by adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code> and then regenerate the initramfs image, which will copy the hostid into it.<br />
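<br />
One way to write the current hostid into {{ic|/etc/hostid}} (a minimal sketch, assuming a little-endian system, since the file stores the hostid as four raw bytes in native byte order):<br />
<br />
 # hid=$(hostid)<br />
 # printf "\x${hid:6:2}\x${hid:4:2}\x${hid:2:2}\x${hid:0:2}" > /etc/hostid<br />
<br />
Afterwards, regenerate the initramfs image:<br />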
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use,<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation with the archiso the network configuration could be different, generating a different hostid than the one in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:<br />
<br />
# mkinitcpio -p linux<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid that marks ownership. So during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. The following is just an example:<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is to write the hostid inside the initramfs image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but this is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive.<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}), and this causes the error.<br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb &#124; grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required.<br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, download the {{ic|archiso}} package.<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file, adding these lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file, adding these lines (TODO: correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More information about the process can be found in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on Linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while retaining all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their names are fixed, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
<br />
Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, which makes compression ineffective, and even identical input produces different output (thanks to salting), which makes deduplication impossible.<br />
To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, for encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as a dependency, which in turn has {{AUR|spl-utils-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needed sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straight forward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is not necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all device have been used in a software RAID set it is paramount to erase any old RAID configuration information. (https://wiki.archlinux.org/index.php/Mdadm#Prepare_the_Devices) }}<br />
<br />
{{Warning|Some disk firmwares misrepresent the physical sector size in reporting to ZFS. For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the id's of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the id's, simply:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, than the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions that to include into the pool. Get it from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (See the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems, you can view a full list with <code>zfs get all <pool></code>. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available for ZFSonLinux HEAD snapshots (which is normally installed on non-LTS kernels), to be released as a new feature of 0.6.3. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time hasn't been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# sudo zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of file being written. This can often help fragmentation and file access, at the cost that ZFS would have to allocate new 128KiB blocks each time only a few bytes are written to.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this, is that ZFS will be committing data '''twice''' to the data disks and it can severely impact performance. You can tell ZFS to not use the ZIL, and in which case data is only committed to the file system once. Disabling the ZIL for non-database file systems or for pools with configured log devices (eg, with SSDs) can actually negatively impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you plan on using ZFS to store your /tmp directory (which may be useful for storing arbitrarily-large sets of files, or simply keeping your RAM free of idle data), you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (eg, with <code>fsync</code> or <code>O_SYNC</code>) and return immediately. While this has severe data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected:<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents any privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== zvols ===<br />
zvols might suffer from the same block size-related issues as RDBMSes, but it's worth noting that the default recordsize for zvols is 8KiB already. If possible, it is best to align any partitions contained in a zvol to your recordsize (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''recordsize''' to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use :<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including and read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso as the hostid is different in the archiso as it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempts made to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can be produced at boot time abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair by either exporting the pool, or adding the {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]]<br />
<br />
To export a pool,<br />
<br />
{{bc|# zpool export bigdata}}<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in 2 steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow to use swapfiles, but users can use a ZFS volume (ZVOL) as swap. It is importart to set the ZVOL block size to match the system page size, which can be obtained by the <tt>getconf PAGESIZE</tt> command (default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create a 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/zroot/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
Keep in mind the Hibernate hook must be loaded before filesystems, so using ZVOL as swap will not allow to use hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, set more fine-grained control as well by label, if, for example, no monthlies are to be kept on a snapshot, for example, set <tt>com.sun:auto-snapshot:monthly=false</tt>.<br />
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
{{bc|<nowiki>zfs.zfs_arc_max=536870912</nowiki> # (for 512MB)}}<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem,<br />
<br />
{{bc|/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition}}<br />
<br />
The way to overcome this is to use <code>-f</code> with the zfs create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
{{bc|ZFS: No hostid found on kernel command line or /etc/hostid.}}<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hosted. There are two solutions, for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader. For example, adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image. Which will copy the hostid into the initramfs image.<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
{{bc|# zpool import -a -f}}<br />
<br />
now export the pool:<br />
<br />
{{bc|# zpool export <pool>}}<br />
<br />
To see the available pools, use,<br />
<br />
{{bc|# zpool status}}<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso the network configuration could be different generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export pool as described above, and then rebuild ramdisk in normally booted system:<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership. So during the first boot the zpool should mount correctly. If it does not there is some other problem.<br />
<br />
Reboot again, if the zfs pool refuses to mount it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number, once the hostid is coherent across the reboots the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. This one is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
This number have to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initram image, see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check adding {{ic|zfs_force&#61;1}} in the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted it should be replaced A.S.A.P. with an identical drive.<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f}}<br />
<br />
but in this instance, the following error is produced:<br />
<br />
{{bc|cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment}}<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb {{!}} grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o <nowiki>ashift=9</nowiki> -f}}<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required.<br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, download the {{ic|archiso}} package.<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file, adding these lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file, adding these lines (TODO: correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More information about the process can be read in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on linux does not support encryption directly, but zpools can be created in dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created from multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
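<br />
For pools that do not hold the root filesystem, one way to have the mapping created at boot is a [[Dm-crypt/System configuration#crypttab|crypttab]] entry. This is a sketch assuming the plain-mode parameters from above:<br />
 enc  /dev/sdX  /dev/sdZ  plain,hash=sha512,cipher=twofish-xts-plain64,size=512<br />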
<br />
Creating encrypted zpools works fine, but if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even the same input produces different output (thanks to salting), making deduplication impossible. To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If the two versions differ, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will generate the correct kernel module dependency information for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS&diff=320459ZFS2014-06-16T19:31:48Z<p>Demizer: /* Does not contain an EFI label */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"], ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL-incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as dependencies, the latter of which in turn has {{AUR|spl-utils-git}} as a dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages, with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live up to its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information. See [[Mdadm#Prepare the Devices]].}}<br />
<br />
{{Warning|Some disk firmwares misrepresent the physical sector size in reporting to ZFS. For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the ids of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of fewer than 10 devices. To find the ids, simply:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool, as found in {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because of the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (see the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems; you can view a full list with <code>zfs get all <pool></code>. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool, and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available for ZFSonLinux HEAD snapshots (which is normally installed on non-LTS kernels), to be released as a new feature of 0.6.3. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time hasn't been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed again using the zfs command:<br />
 # zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help fragmentation and file access, at the cost that ZFS has to allocate new 128KiB blocks each time only a few bytes are written.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases often sync their data files to the file system on their own transaction commits anyway. The end result is that ZFS will be committing data '''twice''' to the data disks, and it can severely impact performance. You can tell ZFS not to use the ZIL, in which case data is only committed to the file system once. Disabling the ZIL for non-database file systems or for pools with configured log devices (e.g. with SSDs) can actually negatively impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you plan on using ZFS to store your /tmp directory (which may be useful for storing arbitrarily-large sets of files, or simply keeping your RAM free of idle data), you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (eg, with <code>fsync</code> or <code>O_SYNC</code>) and return immediately. While this has severe data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected:<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents any privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== zvols ===<br />
zvols might suffer from the same block size-related issues as RDBMSes, but it's worth noting that the default block size ({{ic|volblocksize}}) for zvols is 8KiB already. If possible, it is best to align any partitions contained in a zvol to your block size (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the block size to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
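<br />
For example, the block size can be set explicitly when creating a zvol (pool and volume names are placeholders):<br />
 # zfs create -V 10G -o volblocksize=8K <pool>/<volume><br />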
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
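<br />
Alternatively, scrubbing can be scheduled with [[systemd]] timers instead of cron. This is a minimal sketch; the unit names are examples and {{ic|bigdata}} stands in for your pool:<br />
<br />
{{hc|/etc/systemd/system/zfs-scrub@.service|<nowiki><br />
[Unit]<br />
Description=Scrub ZFS pool %i<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/usr/bin/zpool scrub %i</nowiki>}}<br />
<br />
{{hc|/etc/systemd/system/zfs-scrub@.timer|<nowiki><br />
[Unit]<br />
Description=Weekly scrub of ZFS pool %i<br />
<br />
[Timer]<br />
OnCalendar=weekly<br />
Persistent=true<br />
<br />
[Install]<br />
WantedBy=timers.target</nowiki>}}<br />
<br />
Enable the timer for a given pool with {{ic|systemctl enable zfs-scrub@bigdata.timer}}.<br />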
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including any read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid of the archiso differs from that of the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can be produced at boot time, abruptly dropping the system to the busybox console and requiring an archiso for an emergency repair by either exporting the pool, or adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool:<br />
<br />
{{bc|# zpool export bigdata}}<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow the use of swapfiles, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the <tt>getconf PAGESIZE</tt> command (the default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create an 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/zroot/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
Keep in mind that the hibernate hook must be loaded before the filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each one named by date and label (hourly, daily, etc.), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally, adjust the {{ic|--keep}} parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, more fine-grained control is possible by label: if, for example, no monthlies are to be kept on a dataset, set <tt>com.sun:auto-snapshot:monthly=false</tt> on it.<br />
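<br />
For example, to set these properties on hypothetical datasets:<br />
 # zfs set com.sun:auto-snapshot=false <pool>/<dataset><br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/<dataset><br />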
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
{{bc|<nowiki>zfs.zfs_arc_max=536870912</nowiki> # (for 512MB)}}<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
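<br />
The limit can also be changed on a running system through the module parameter, without rebooting; the new maximum takes effect as the cache grows or shrinks:<br />
<br />
{{bc|<nowiki># echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max</nowiki>}}<br />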
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem:<br />
<br />
{{bc|/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition}}<br />
<br />
The way to overcome this is to use <code>-f</code> with the zpool create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
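<br />
Later OpenZFS releases ship a {{ic|zgenhostid}} utility for this purpose; assuming the installed ZFS version provides it, the current hostid can be written to {{ic|/etc/hostid}} before rebuilding the image:<br />
 # zgenhostid $(hostid)<br />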
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool:<br />
<br />
{{bc|# zpool import -a -f}}<br />
<br />
Now export the pool:<br />
<br />
{{bc|# zpool export <pool>}}<br />
<br />
To see the available pools, use:<br />
<br />
{{bc|# zpool status}}<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup, so during the installation in the archiso the network configuration can differ and generate a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is consistent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. The value below is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initramfs image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but this is not advisable as a permanent solution.<br />
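<br />
For example, with GRUB as the boot loader (an assumption; adapt this to your boot loader), append the parameter to the kernel command line in {{ic|/etc/default/grub}} and regenerate the configuration:<br />
 GRUB_CMDLINE_LINUX="spl.spl_hostid=0a0af0f8"<br />
 # grub-mkconfig -o /boot/grub/grub.cfg<br />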
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive:<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f}}<br />
<br />
but in this instance, the following error is produced:<br />
<br />
{{bc|cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment}}<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}), and this causes the error above.<br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb {{!}} grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o <nowiki>ashift=9</nowiki> -f}}<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required.<br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, download the {{ic|archiso}} package.<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file, adding these lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file, adding these lines (TODO: correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More information about the process can be read in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on linux does not support encryption directly, but zpools can be created in dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created from multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
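<br />
For pools that do not hold the root filesystem, one way to have the mapping created at boot is a [[Dm-crypt/System configuration#crypttab|crypttab]] entry. This is a sketch assuming the plain-mode parameters from above:<br />
 enc  /dev/sdX  /dev/sdZ  plain,hash=sha512,cipher=twofish-xts-plain64,size=512<br />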
<br />
Creating encrypted zpools works fine, but if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even the same input produces different output (thanks to salting), making deduplication impossible. To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If the two versions differ, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will generate the correct kernel module dependency information for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS&diff=320458ZFS2014-06-16T19:31:26Z<p>Demizer: /* ZFS is using too much RAM */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"], ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL-incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as dependencies, the latter of which in turn has {{AUR|spl-utils-git}} as a dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages, with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live up to its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information. See [[Mdadm#Prepare the Devices]].}}<br />
<br />
{{Warning|Some disk firmware misreports the physical sector size to ZFS. For Advanced Format disks with a 4KB blocksize, an ashift of 12 is recommended for best performance. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the ids of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool ZFS on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the ids, run:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool, as found in {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because of backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (see the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems; you can view a full list with <code>zfs get all <pool></code>. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool, and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available for ZFSonLinux HEAD snapshots (which are normally what is installed for non-LTS kernels), to be released as a new feature of 0.6.3. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
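<br />
To gauge how effective the chosen algorithm is on existing data, the read-only {{ic|compressratio}} property can be inspected:<br />
 # zfs get compressratio <pool><br />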
<br />
Other options for zfs can be displayed again using the zfs command:<br />
 # zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help fragmentation and file access, at the cost of ZFS having to allocate a new 128KiB block each time only a few bytes are written.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this is that ZFS will be committing data '''twice''' to the data disks, and it can severely impact performance. You can tell ZFS not to use the ZIL, in which case data is only committed to the file system once. Disabling the ZIL for non-database file systems or for pools with configured log devices (e.g. with SSDs) can actually negatively impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you plan on using ZFS to store your /tmp directory (which may be useful for storing arbitrarily-large sets of files, or simply keeping your RAM free of idle data), you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (eg, with <code>fsync</code> or <code>O_SYNC</code>) and return immediately. While this has severe data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected:<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents any privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== zvols ===<br />
zvols might suffer from the same block size-related issues as RDBMSes, but it's worth noting that the default recordsize for zvols is 8KiB already. If possible, it is best to align any partitions contained in a zvol to your recordsize (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''recordsize''' to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
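<br />
As a sketch of the above, the following creates a zvol with an explicit 8KiB block size and formats it with a 4KiB ext4 block size, per the parenthetical above; the volume name and size are hypothetical:<br />
<br />
 # zfs create -V 10G -b 8K <pool>/vol<br />
 # mkfs.ext4 -b 4096 /dev/zvol/<pool>/vol<br />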
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
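<br />
The attribute can be read back with {{ic|zfs get}}, or across all datasets with {{ic|zfs list}}:<br />
<br />
 # zfs get quota <nameofzpool>/<nameofdataset><br />
 # zfs list -o name,quota,used<br />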
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
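<br />
Alternatively, on a [[systemd]] system the scrub can be driven by a timer instead of cron. The following is a minimal sketch with hypothetical unit names:<br />
<br />
{{hc|/etc/systemd/system/zfs-scrub@.service|<nowiki><br />
[Unit]<br />
Description=Scrub ZFS pool %i<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/usr/bin/zpool scrub %i<br />
</nowiki>}}<br />
<br />
{{hc|/etc/systemd/system/zfs-scrub@.timer|<nowiki><br />
[Unit]<br />
Description=Weekly scrub of ZFS pool %i<br />
<br />
[Timer]<br />
OnCalendar=weekly<br />
Persistent=true<br />
<br />
[Install]<br />
WantedBy=timers.target<br />
</nowiki>}}<br />
<br />
Enable the timer per pool, e.g. {{ic|systemctl enable zfs-scrub@<pool>.timer}}.<br />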
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid is different in the archiso than it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can be produced at boot time, abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair, either by exporting the pool or by adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool,<br />
<br />
{{bc|# zpool export bigdata}}<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow the use of swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the <tt>getconf PAGESIZE</tt> command (the default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create an 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/zroot/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernation, keep a separate partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, more fine-grained control is available per label: if, for example, no monthly snapshots are to be kept on a dataset, set <tt>com.sun:auto-snapshot:monthly=false</tt>.<br />
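<br />
Both are ordinary ZFS user properties set with {{ic|zfs set}}; dataset names here are hypothetical:<br />
<br />
 # zfs set com.sun:auto-snapshot=false <pool>/<dataset><br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/<dataset><br />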
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a Python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots.<br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
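<br />
Under the hood this replication is the standard {{ic|zfs send}}/{{ic|zfs receive}} pipeline, which can also be run by hand; host, pool, and snapshot names below are hypothetical:<br />
<br />
 # zfs snapshot <pool>/<dataset>@replica1<br />
 # zfs send <pool>/<dataset>@replica1 | ssh <remotehost> zfs receive <backuppool>/<dataset><br />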
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
{{bc|<nowiki>zfs.zfs_arc_max=536870912</nowiki> # (for 512MB)}}<br />
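<br />
On a running system, the same limit can usually also be changed through the module parameter (the value is in bytes, as above); this takes effect immediately but does not persist across reboots:<br />
 # echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max<br />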
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error can occur when attempting to create a zpool:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use the <code>-f</code> flag with the <code>zpool create</code> command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
{{bc|# zpool import -a -f}}<br />
<br />
now export the pool:<br />
<br />
{{bc|# zpool export <pool>}}<br />
<br />
To see the available pools, use,<br />
<br />
{{bc|# zpool status}}<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso, the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. The value here is just an example:<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initram image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but this is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced A.S.A.P. with an identical drive.<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f}}<br />
<br />
but in this instance, the following error is produced:<br />
<br />
{{bc|cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment}}<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb &#124; grep ashift}}. Then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o <nowiki>ashift=9</nowiki> -f}}<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required.<br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, install the {{ic|archiso}} package:<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file, adding these lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file, adding these lines (TODO, correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More information about the process can be found in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on Linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while retaining all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their names are fixed, so you just need to change {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created over multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
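<br />
When taking such a pool offline, order matters: export the pool before closing the dm-crypt mapping, since ZFS still holds the device until the export completes:<br />
<br />
 # zpool export zroot<br />
 # cryptsetup close enc<br />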
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
<br />
Creating encrypted zpools works fine, but if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even identical input produces different output (thanks to salting), making deduplication impossible.<br />
To reduce the unnecessary overhead, it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will generate module dependency information for the kernel version installed in the chroot, allowing the correct kernel modules to be loaded.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as a dependency, which in turn has {{AUR|spl-utils-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needed sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straight forward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is not necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all device have been used in a software RAID set it is paramount to erase any old RAID configuration information. (https://wiki.archlinux.org/index.php/Mdadm#Prepare_the_Devices) }}<br />
<br />
{{Warning|Some disk firmwares misrepresent the physical sector size in reporting to ZFS. For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the id's of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the id's, simply:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, than the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions that to include into the pool. Get it from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (See the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems, you can view a full list with <code>zfs get all <pool></code>. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available for ZFSonLinux HEAD snapshots (which is normally installed on non-LTS kernels), to be released as a new feature of 0.6.3. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time hasn't been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# sudo zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of file being written. This can often help fragmentation and file access, at the cost that ZFS would have to allocate new 128KiB blocks each time only a few bytes are written to.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this, is that ZFS will be committing data '''twice''' to the data disks and it can severely impact performance. You can tell ZFS to not use the ZIL, and in which case data is only committed to the file system once. Disabling the ZIL for non-database file systems or for pools with configured log devices (eg, with SSDs) can actually negatively impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you plan on using ZFS to store your /tmp directory (which may be useful for storing arbitrarily-large sets of files, or simply keeping your RAM free of idle data), you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (eg, with <code>fsync</code> or <code>O_SYNC</code>) and return immediately. While this has severe data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected:<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents any privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== zvols ===<br />
zvols might suffer from the same block size-related issues as RDBMSes, but it's worth noting that the default recordsize for zvols is 8KiB already. If possible, it is best to align any partitions contained in a zvol to your recordsize (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''recordsize''' to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use :<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including and read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso as the hostid is different in the archiso as it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempts made to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can be produced at boot time abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair by either exporting the pool, or adding the {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]]<br />
<br />
To export a pool,<br />
<br />
{{bc|# zpool export bigdata}}<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in 2 steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow to use swapfiles, but users can use a ZFS volume (ZVOL) as swap. It is importart to set the ZVOL block size to match the system page size, which can be obtained by the <tt>getconf PAGESIZE</tt> command (default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create a 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/zroot/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
Keep in mind the Hibernate hook must be loaded before filesystems, so using ZVOL as swap will not allow to use hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, set more fine-grained control as well by label, if, for example, no monthlies are to be kept on a snapshot, for example, set <tt>com.sun:auto-snapshot:monthly=false</tt>.<br />
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
{{bc|zfs.zfs_arc_max=536870912 # (for 512MB)}}<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem,<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zfs create command.<br />
<br />
=== No hostid found ===<br />
<br />
This error occurs at boot, with the following line appearing before the initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. The first is to place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code> and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool:<br />
<br />
{{bc|# zpool import -a -f}}<br />
<br />
now export the pool:<br />
<br />
{{bc|# zpool export <pool>}}<br />
<br />
To see the available pools, use:<br />
<br />
{{bc|# zpool status}}<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation from the archiso, the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid that marks ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and this confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. The following is just an example:<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is to write the hostid inside the initramfs image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation of this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but this is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive:<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f}}<br />
<br />
In this instance, however, the following error is produced:<br />
<br />
{{bc|cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment}}<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb | grep ashift}}.<br />
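<br />
Illustrative output (assuming the pool was created with an ashift of 12; exact formatting varies):<br />
<br />
# zdb | grep ashift<br />
ashift: 12<br />
<br />
Then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />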
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o <nowiki>ashift=9</nowiki> -f}}<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
  pool: bigdata<br />
 state: DEGRADED<br />
status: One or more devices is currently being resilvered.  The pool will<br />
        continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
  scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
    10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
    2.57G resilvered, 0.17% done<br />
config:<br />
<br />
        NAME                                   STATE     READ WRITE CKSUM<br />
        bigdata                                DEGRADED     0     0     0<br />
          raidz1-0                             DEGRADED     0     0     0<br />
            replacing-0                        OFFLINE      0     0     0<br />
              ata-ST3000DM001-9YN166_S1F0KDGY  OFFLINE      0     0     0<br />
              ata-ST3000DM001-1CH166_W1F478BD  ONLINE       0     0     0  (resilvering)<br />
            ata-ST3000DM001-9YN166_S1F0JKRR    ONLINE       0     0     0<br />
            ata-ST3000DM001-9YN166_S1F0KBP8    ONLINE       0     0     0<br />
            ata-ST3000DM001-9YN166_S1F0JTM1    ONLINE       0     0     0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required.<br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, download the {{ic|archiso}} package.<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file, adding these lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file, adding these lines (TODO: correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
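<br />
The resulting image can then be written to a USB key with standard tools (a hedged example; replace {{ic|<image>}} with the generated file name and double-check the target device):<br />
<br />
# dd bs=4M if=/root/media/out/<image>.iso of=/dev/sdX && sync<br />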
<br />
More information about the process can be read in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while keeping all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their names are fixed, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there.<br />
Since zpools can be created over multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted; otherwise the protection might be partially lost.<br />
<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line must enable the keyboard for the password prompt, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed no import errors will occur.<br />
<br />
Creating encrypted zpools works fine, but if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even the same input produces different output (thanks to salting), making deduplication impossible.<br />
To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as a dependency, which in turn has {{AUR|spl-utils-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needed sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straight forward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is not necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all device have been used in a software RAID set it is paramount to erase any old RAID configuration information. (https://wiki.archlinux.org/index.php/Mdadm#Prepare_the_Devices) }}<br />
<br />
{{Warning|Some disk firmwares misrepresent the physical sector size in reporting to ZFS. For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the id's of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the id's, simply:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, than the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions that to include into the pool. Get it from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (See the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems, you can view a full list with <code>zfs get all <pool></code>. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available for ZFSonLinux HEAD snapshots (which is normally installed on non-LTS kernels), to be released as a new feature of 0.6.3. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time hasn't been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# sudo zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of file being written. This can often help fragmentation and file access, at the cost that ZFS would have to allocate new 128KiB blocks each time only a few bytes are written to.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this, is that ZFS will be committing data '''twice''' to the data disks and it can severely impact performance. You can tell ZFS to not use the ZIL, and in which case data is only committed to the file system once. Disabling the ZIL for non-database file systems or for pools with configured log devices (eg, with SSDs) can actually negatively impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you plan on using ZFS to store your /tmp directory (which may be useful for storing arbitrarily-large sets of files, or simply keeping your RAM free of idle data), you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (eg, with <code>fsync</code> or <code>O_SYNC</code>) and return immediately. While this has severe data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected:<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents any privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== zvols ===<br />
zvols might suffer from the same block size-related issues as RDBMSes, but it's worth noting that the default recordsize for zvols is 8KiB already. If possible, it is best to align any partitions contained in a zvol to your recordsize (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''recordsize''' to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use :<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including and read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso as the hostid is different in the archiso as it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempts made to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can be produced at boot time abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair by either exporting the pool, or adding the {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]]<br />
<br />
To export a pool,<br />
<br />
{{bc|# zpool export bigdata}}<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in 2 steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow to use swapfiles, but users can use a ZFS volume (ZVOL) as swap. It is importart to set the ZVOL block size to match the system page size, which can be obtained by the <tt>getconf PAGESIZE</tt> command (default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create a 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/zroot/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
Keep in mind the Hibernate hook must be loaded before filesystems, so using ZVOL as swap will not allow to use hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, set more fine-grained control as well by label, if, for example, no monthlies are to be kept on a snapshot, for example, set <tt>com.sun:auto-snapshot:monthly=false</tt>.<br />
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem,<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zfs create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hosted. There are two solutions, for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader. For example, adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image. Which will copy the hostid into the initramfs image.<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
{{bc|# zpool import -a -f}}<br />
<br />
now export the pool:<br />
<br />
{{bc|# zpool export <pool>}}<br />
<br />
To see the available pools, use,<br />
<br />
{{bc|# zpool status}}<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso the network configuration could be different generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export pool as described above, and then rebuild ramdisk in normally booted system:<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership. So during the first boot the zpool should mount correctly. If it does not there is some other problem.<br />
<br />
Reboot again, if the zfs pool refuses to mount it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number, once the hostid is coherent across the reboots the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. This one is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
This number have to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initram image, see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check adding {{ic|zfs_force&#61;1}} in the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted it should be replaced A.S.A.P. with an identical drive.<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f}}<br />
<br />
but in this instance, the following error is produced:<br />
<br />
{{bc|cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment}}<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb | grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o <nowiki>ashift=9</nowiki> -f}}<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea make an installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required. <br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, download the {{ic|archiso}} package.<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file adding those lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file adding those lines (TODO, correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More informations about the process can be read in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on linux does not support encryption directly, but zpools can be created in dm-crypt block devices. Since the zpool is created on the plain-text<br />
abstraction it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their name is fixed. So you just need to change {{ic|zpool create}} commands to<br />
point to that names. The idea is configuring the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there.<br />
Since zpools can be created in multiple devices (raid, mirroring, striping, ...), it is important all the devices are encrypted otherwise the protection<br />
might be partially lost.<br />
<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinicpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed no import errors will occur.<br />
<br />
Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data has always high entropy making compression ineffective and even from the same input you get different output (thanks to salting) making deduplication impossible.<br />
To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example to have an encrypted home: (the two passwords, encryption and login, must be the same)<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS&diff=320452ZFS2014-06-16T19:23:54Z<p>Demizer: /* Devices have different sector alignment */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as a dependency, which in turn has {{AUR|spl-utils-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needed sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
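<br />
After adding the entry, refresh the package databases and confirm that pacman sees the (initially empty) repository; the repo name {{ic|chroot_local}} matches the example above:<br />
<br />
 # pacman -Syy<br />
 $ pacman -Sl chroot_local<br />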
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
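<br />
As a minimal sketch of such a throwaway pool (file names as above; the 2G size is arbitrary, anything above ZFS's 64M vdev minimum works):<br />
<br />
 # truncate -s 2G ~/zfs0.img ~/zfs1.img ~/zfs2.img<br />
 # zpool create test raidz ~/zfs0.img ~/zfs1.img ~/zfs2.img<br />
 # zpool destroy test<br />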
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
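<br />
For example, with a pool named {{ic|bigdata}} (the name used in the examples below), setting and then verifying the property:<br />
<br />
 # zpool set cachefile=/etc/zfs/zpool.cache bigdata<br />
 # zpool get cachefile bigdata<br />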
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
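<br />
To verify that the target is active:<br />
<br />
 $ systemctl status zfs.target<br />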
<br />
==Create a storage pool==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information. See [[Mdadm#Prepare the Devices]].}}<br />
<br />
{{Warning|Some disk firmware misreports the physical sector size to ZFS. For Advanced Format disks with a 4KB block size, an ashift of 12 is recommended for best performance. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the IDs of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device IDs when creating ZFS storage pools of fewer than 10 devices. To find the IDs, simply:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The IDs should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool, as found in {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes, because such disks report 512-byte sectors for backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (see the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
  pool: bigdata<br />
 state: ONLINE<br />
  scan: none requested<br />
config:<br />
<br />
        NAME                                       STATE     READ WRITE CKSUM<br />
        bigdata                                    ONLINE       0     0     0<br />
          raidz1-0                                 ONLINE       0     0     0<br />
            ata-ST3000DM001-9YN166_S1F0KDGY-part1  ONLINE       0     0     0<br />
            ata-ST3000DM001-9YN166_S1F0JKRR-part1  ONLINE       0     0     0<br />
            ata-ST3000DM001-9YN166_S1F0KBP8-part1  ONLINE       0     0     0<br />
            ata-ST3000DM001-9YN166_S1F0JTM1-part1  ONLINE       0     0     0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems; you can view a full list with <code>zfs get all <pool></code>. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool. It can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available in ZFSonLinux HEAD snapshots (which are normally what is installed on non-LTS kernels), to be released as a new feature of 0.6.3. This brings the default ext4/xfs atime semantics to ZFS: access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other zfs options can be displayed, again using the zfs command:<br />
 # zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help fragmentation and file access, at the cost that ZFS would have to allocate new 128KiB blocks each time only a few bytes are written.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result is that ZFS commits data '''twice''' to the data disks, which can severely impact performance. You can tell ZFS not to use the ZIL, in which case data is committed to the file system only once. Disabling the ZIL for non-database file systems or for pools with configured log devices (e.g. with SSDs) can actually negatively impact performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
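<br />
To confirm the properties afterwards (pool and dataset names as in the examples above):<br />
<br />
 $ zfs get recordsize,primarycache,logbias <pool>/postgres<br />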
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you plan on using ZFS to store your /tmp directory (which may be useful for storing arbitrarily-large sets of files, or simply keeping your RAM free of idle data), you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (e.g. with <code>fsync</code> or <code>O_SYNC</code>) and return immediately. While this has severe data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected:<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents any privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
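<br />
Afterwards, {{ic|systemctl is-enabled tmp.mount}} should report {{ic|masked}}.<br />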
<br />
=== zvols ===<br />
zvols might suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default block size ('''volblocksize''') for zvols is 8KiB already. If possible, it is best to align any partitions contained in a zvol to that block size (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the block size to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
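<br />
For example, a sketch of creating a zvol with an explicit 8KiB block size and a file system whose block size divides it evenly (names and sizes are illustrative):<br />
<br />
 # zfs create -V 10G -b 8K <pool>/vol<br />
 # mkfs.ext4 -b 4096 /dev/zvol/<pool>/vol<br />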
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a dataset nested within another (quotas apply to datasets, not to plain directories):<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
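<br />
As an alternative to cron on a systemd machine, a timer can drive the scrub. Here is a minimal sketch using a template unit; the unit names and schedule are illustrative and not shipped by any package:<br />
<br />
{{hc|/etc/systemd/system/zfs-scrub@.service|<nowiki><br />
[Unit]<br />
Description=Scrub ZFS pool %i<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/usr/bin/zpool scrub %i<br />
</nowiki>}}<br />
<br />
{{hc|/etc/systemd/system/zfs-scrub@.timer|<nowiki><br />
[Unit]<br />
Description=Weekly scrub of ZFS pool %i<br />
<br />
[Timer]<br />
OnCalendar=weekly<br />
Persistent=true<br />
<br />
[Install]<br />
WantedBy=timers.target<br />
</nowiki>}}<br />
<br />
Enable and start the timer for a given pool:<br />
<br />
 # systemctl enable zfs-scrub@<pool>.timer<br />
 # systemctl start zfs-scrub@<pool>.timer<br />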
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid in the archiso differs from the one in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempts made to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can occur at boot time, abruptly dropping the system into the busybox console and requiring an archiso for an emergency repair, either by exporting the pool or by adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
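<br />
On the destination system, list importable pools and then import by name (using the example pool):<br />
<br />
 # zpool import<br />
 # zpool import bigdata<br />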
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow the use of swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the <tt>getconf PAGESIZE</tt> command (the default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create an 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
 # swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, finer-grained control is possible per label: if, for example, no monthly snapshots should be kept for a dataset, set <tt>com.sun:auto-snapshot:monthly=false</tt> on it.<br />
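<br />
For example (dataset names are hypothetical):<br />
<br />
 # zfs set com.sun:auto-snapshot=false <pool>/scratch<br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/home<br />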
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a Python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots.<br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
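<br />
With ZFS on Linux, the same module parameter can also be changed at runtime, without rebooting (value in bytes, as above):<br />
<br />
 # echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max<br />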
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem,<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zpool create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. One is to place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code> and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
{{bc|# zpool import -a -f}}<br />
<br />
now export the pool:<br />
<br />
{{bc|# zpool export <pool>}}<br />
<br />
To see the available pools, use,<br />
<br />
{{bc|# zpool status}}<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso, the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid that marks ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. The value below is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initram image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but this is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive.<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f}}<br />
<br />
but in this instance, the following error is produced:<br />
<br />
{{bc|cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment}}<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb &#124; grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o <nowiki>ashift=9</nowiki> -f}}<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
  pool: bigdata<br />
 state: DEGRADED<br />
status: One or more devices is currently being resilvered.  The pool will<br />
        continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
  scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
        10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
        2.57G resilvered, 0.17% done<br />
config:<br />
<br />
        NAME                                   STATE     READ WRITE CKSUM<br />
        bigdata                                DEGRADED     0     0     0<br />
          raidz1-0                             DEGRADED     0     0     0<br />
            replacing-0                        OFFLINE      0     0     0<br />
              ata-ST3000DM001-9YN166_S1F0KDGY  OFFLINE      0     0     0<br />
              ata-ST3000DM001-1CH166_W1F478BD  ONLINE       0     0     0  (resilvering)<br />
            ata-ST3000DM001-9YN166_S1F0JKRR    ONLINE       0     0     0<br />
            ata-ST3000DM001-9YN166_S1F0KBP8    ONLINE       0     0     0<br />
            ata-ST3000DM001-9YN166_S1F0JTM1    ONLINE       0     0     0<br />
<br />
errors: No known data errors}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required.<br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, download the {{ic|archiso}} package.<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file, adding these lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file, adding these lines (TODO: correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More information about the process can be read in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on Linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while retaining all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created over multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted; otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
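<br />
A LUKS-based variant of the same idea (device and mapper names as above) might look like:<br />
<br />
 # cryptsetup luksFormat /dev/sdX<br />
 # cryptsetup open --type=luks /dev/sdX enc<br />
 # zpool create zroot /dev/mapper/enc<br />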
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
<br />
Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even from the same input you get different output (thanks to salting), making deduplication impossible.<br />
To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS, which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Demizer
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Experimenting with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|zfs-utils-git}} and {{AUR|spl-git}} as a dependency, which in turn has {{AUR|spl-utils-git}} as dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needed sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straight forward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is not necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all device have been used in a software RAID set it is paramount to erase any old RAID configuration information. (https://wiki.archlinux.org/index.php/Mdadm#Prepare_the_Devices) }}<br />
<br />
{{Warning|Some disk firmwares misrepresent the physical sector size in reporting to ZFS. For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the id's of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the id's, simply:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, than the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions that to include into the pool. Get it from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (See the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
Many parameters are available for zfs file systems, you can view a full list with <code>zfs get all <pool></code>. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available for ZFSonLinux HEAD snapshots (which is normally installed on non-LTS kernels), to be released as a new feature of 0.6.3. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time hasn't been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# sudo zfs get all <pool><br />
<br />
=== Database ===<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of file being written. This can often help fragmentation and file access, at the cost that ZFS would have to allocate new 128KiB blocks each time only a few bytes are written to.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this, is that ZFS will be committing data '''twice''' to the data disks and it can severely impact performance. You can tell ZFS to not use the ZIL, and in which case data is only committed to the file system once. Disabling the ZIL for non-database file systems or for pools with configured log devices (eg, with SSDs) can actually negatively impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
If you plan on using ZFS to store your /tmp directory (which may be useful for storing arbitrarily-large sets of files, or simply keeping your RAM free of idle data), you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (eg, with <code>fsync</code> or <code>O_SYNC</code>) and return immediately. While this has severe data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected:<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents any privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== zvols ===<br />
zvols might suffer from the same block size-related issues as RDBMSes, but it's worth noting that the default recordsize for zvols is 8KiB already. If possible, it is best to align any partitions contained in a zvol to your recordsize (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''recordsize''' to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use :<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including and read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso as the hostid is different in the archiso as it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempts made to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can be produced at boot time abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair by either exporting the pool, or adding the {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]]<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in 2 steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow to use swapfiles, but users can use a ZFS volume (ZVOL) as swap. It is importart to set the ZVOL block size to match the system page size, which can be obtained by the <tt>getconf PAGESIZE</tt> command (default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create a 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as a swap partition:<br />
<br />
 # mkswap -f /dev/zvol/<pool>/swap<br />
 # swapon /dev/zvol/<pool>/swap<br />
<br />
ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when swap is not full.<br />
<br />
To make the swap volume permanent, add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
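<br />
To activate the new entry immediately and verify it, something like the following should work (assuming a reasonably recent util-linux for the {{ic|--show}} option):<br />
<br />
 # swapon -a<br />
 $ swapon --show<br />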
<br />
Keep in mind that the hibernate hook must be loaded before the filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernation, keep a separate partition for it.<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
# zfs umount -a<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, finer-grained control is possible per label: if, for example, no monthly snapshots are to be kept for a dataset, set <tt>com.sun:auto-snapshot:monthly=false</tt> on it.<br />
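<br />
For instance, with hypothetical dataset names:<br />
<br />
 # zfs set com.sun:auto-snapshot=false <pool>/scratch<br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/data<br />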
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a Python service that takes daily snapshots of a configurable set of ZFS datasets and cleans them out according to a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to, e.g., keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots.<br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise-class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
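<br />
The parameter can usually also be changed on a running system through sysfs, without rebooting; a sketch (the new limit may only take full effect as the cache naturally shrinks):<br />
<br />
 # echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max<br />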
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zpool create command.<br />
<br />
=== No hostid found ===<br />
<br />
This error occurs at boot, with the following lines appearing before the initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. One is to place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then to regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
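<br />
If {{ic|/etc/hostid}} does not exist yet, one commonly used sketch writes the current hostid into it; note that the file stores the raw four bytes, so this assumes a little-endian machine:<br />
<br />
 # printf $(hostid | sed 's/\(..\)\(..\)\(..\)\(..\)/\\x\4\\x\3\\x\2\\x\1/') > /etc/hostid<br />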
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
{{bc|# zpool import -a -f}}<br />
<br />
now export the pool:<br />
<br />
{{bc|# zpool export <pool>}}<br />
<br />
To see the available pools, use:<br />
<br />
{{bc|# zpool status}}<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation with the archiso, the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk from the normally booted system:<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid that marks ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses ZFS. Manually tell ZFS the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid; the value below is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0a0af0f8}}. Another solution is writing the hostid inside the initramfs image, see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive:<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f}}<br />
<br />
but in this instance, the following error is produced:<br />
<br />
{{bc|cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment}}<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format disks with a 4KB block size, an ashift of 12 is recommended for best performance. See [http://zfsonlinux.org/faq.html#PerformanceConsideration 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb | grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
{{bc|# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o <nowiki>ashift=9</nowiki> -f}}<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0}}<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required.<br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, install the {{ic|archiso}} package:<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file, adding these lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file, adding these lines (TODO: correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed, then create the image from within {{ic|/root/media}}:<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More information about the process can be found in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on Linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their names are fixed, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
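<br />
For pools that are not needed in early boot, the mapping can instead be opened automatically through {{ic|/etc/crypttab}}; a minimal sketch matching the plain dm-crypt example above (device names are placeholders):<br />
<br />
{{hc|/etc/crypttab|<nowiki><br />
enc  /dev/sdX  /dev/sdZ  plain,hash=sha512,cipher=twofish-xts-plain64,size=512<br />
</nowiki>}}<br />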
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line must enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
<br />
Creating encrypted zpools works fine, but if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even the same input produces different output (thanks to salting), making deduplication impossible. To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
# zfs create -o compression=off \<br />
-o dedup=off \<br />
-o mountpoint=/home/<username> \<br />
<zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add the archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation, i.e. the version gathered from {{ic|pacman -Qi linux}}:<br />
<br />
 # depmod -a 3.6.9-1-ARCH<br />
<br />
This will generate the module dependency information for the kernel version installed in the chroot, so the correct kernel modules can be loaded.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Demizerhttps://wiki.archlinux.org/index.php?title=Redshift&diff=312808Redshift2014-05-01T02:33:59Z<p>Demizer: Add missing optional dependencies.</p>
<hr />
<div>[[Category:X Server]]<br />
[[Category:Graphics]]<br />
[[Category:Eye candy]]<br />
[[Category:Audio/Video]]<br />
From the [http://jonls.dk/redshift/ redshift project web page]:<br />
<br />
:''Redshift adjusts the color temperature of your screen according to your surroundings. This may help your eyes hurt less if you are working in front of the screen at night. This program is inspired by [http://justgetflux.com f.lux] [...].''<br />
<br />
The project is developed on [https://github.com/jonls/redshift GitHub].<br />
<br />
== Installation ==<br />
<br />
The {{Pkg|redshift}} package is available in the [[Official repositories]].<br />
<br />
=== Desktop environments ===<br />
<br />
For desktop environments, the {{ic|redshift-gtk}} command is installed with the {{Pkg|redshift}} package. redshift-gtk provides a system tray icon for controlling redshift. redshift-gtk requires the optional dependencies {{Pkg|python-gobject}}, {{Pkg|python-xdg}}, and {{Pkg|librsvg}}, available from the [[Official repositories]].<br />
<br />
To start redshift-gtk automatically at startup, right-click the system tray icon and select 'Autostart'.<br />
<br />
== Configuration ==<br />
<br />
Redshift needs at least your location, i.e. its latitude and longitude, in order to start. Redshift employs several routines for obtaining your location automatically. If none of them works (e.g. because none of the helper programs it uses is installed), you need to enter your location manually. For most places/cities, an easy way is to look up the Wikipedia page of that place and take the coordinates from there (search the page for "coordinates").<br />
<br />
=== Quick start ===<br />
<br />
To just get it up and running with a basic setup, issue:<br />
<br />
$ redshift -l LAT:LON<br />
<br />
where LAT is the latitude and LON is the longitude of your location.<br />
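<br />
For example, with the coordinates for Hamburg used in the manual setup below:<br />
<br />
$ redshift -l 53.3:10.0<br />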
<br />
=== Automatic location based on GPS ===<br />
<br />
You can also use {{Pkg|gpsd}} to determine your GPS location automatically and use it as an input for Redshift. Create the following script and pass {{ic|$lat}} and {{ic|$lon}} to {{ic|redshift -l "$lat:$lon"}} (note the {{ic|LAT:LON}} colon separator; an unquoted semicolon would be split by the shell):<br />
<br />
#!/bin/bash<br />
date<br />
# Read reports from gpsd until the first TPV (time-position-velocity) record.<br />
#gpsdata=$( gpspipe -w -n 10 | grep -m 1 lon )<br />
gpsdata=$( gpspipe -w | grep -m 1 TPV )<br />
# Extract the individual JSON fields with jsawk.<br />
lat=$( echo "$gpsdata" | jsawk 'return this.lat' )<br />
lon=$( echo "$gpsdata" | jsawk 'return this.lon' )<br />
alt=$( echo "$gpsdata" | jsawk 'return this.alt' )<br />
dt=$( echo "$gpsdata" | jsawk 'return this.time' )<br />
echo "$dt"<br />
echo "You are here: $lat, $lon at $alt"<br />
<br />
For more information, see [https://bbs.archlinux.org/viewtopic.php?pid=1389735#p1389735 this forum thread].<br />
<br />
=== Manual setup ===<br />
<br />
Redshift reads the configuration file {{ic|~/.config/redshift.conf}}, if it exists. However, Redshift does not create that configuration file, so you have to create it manually.<br />
Example for Hamburg/Germany:<br />
<br />
{{hc|~/.config/redshift.conf|<br />
; Global settings<br />
[redshift]<br />
temp-day&#61;5700<br />
temp-night&#61;3500<br />
transition&#61;1<br />
gamma&#61;0.8:0.7:0.8<br />
location-provider&#61;manual<br />
adjustment-method&#61;vidmode<br />
<br />
; The location provider and adjustment method settings<br />
; are in their own sections.<br />
[manual]<br />
; Hamburg<br />
lat&#61;53.3<br />
lon&#61;10.0<br />
<br />
; In this example screen 1 is adjusted by vidmode. Note<br />
; that the numbering starts from 0, so this is actually<br />
; the second screen.<br />
[vidmode]<br />
screen&#61;0<br />
screen&#61;1<br />
}}<br />
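<br />
Once the file is in place, Redshift reads it automatically, so it can be started without any arguments:<br />
<br />
$ redshift<br />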
<br />
== Troubleshooting ==<br />
<br />
=== redshift-gtk will not start ===<br />
<br />
redshift-gtk requires optional dependencies to work correctly. To identify any missing dependencies, run {{ic|redshift-gtk}} from the command line. If a dependency is missing, output similar to the following will be produced:<br />
<br />
Traceback (most recent call last):<br />
File "/usr/bin/redshift-gtk", line 26, in <module><br />
from redshift_gtk.statusicon import run<br />
File "/usr/lib/python3.4/site-packages/redshift_gtk/statusicon.py", line 31, in <module><br />
from gi.repository import Gtk, GLib<br />
ImportError: No module named 'gi.repository'<br />
<br />
If this is the case, installing {{Pkg|python-gobject}}, {{Pkg|python-xdg}}, and {{Pkg|librsvg}} from the [[Official repositories]] will resolve the issue.<br />
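<br />
The missing packages can be installed explicitly as dependencies, for example:<br />
<br />
# pacman -S --asdeps python-gobject python-xdg librsvg<br />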
<br />
== See also ==<br />
* [http://jonls.dk/redshift Redshift website]<br />
* [https://launchpad.net/redshift Redshift on launchpad]</div>Demizerhttps://wiki.archlinux.org/index.php?title=Redshift&diff=312807Redshift2014-05-01T02:32:35Z<p>Demizer: Remove note about redshift not starting.</p>
<hr />
<div>[[Category:X Server]]<br />
[[Category:Graphics]]<br />
[[Category:Eye candy]]<br />
[[Category:Audio/Video]]<br />
From the [http://jonls.dk/redshift/ redshift project web page]:<br />
<br />
:''Redshift adjusts the color temperature of your screen according to your surroundings. This may help your eyes hurt less if you are working in front of the screen at night. This program is inspired by [http://justgetflux.com f.lux] [...].''<br />
<br />
The project is developed on [https://github.com/jonls/redshift GitHub].<br />
<br />
== Installation ==<br />
<br />
The {{Pkg|redshift}} package is available in the [[Official repositories]].<br />
<br />
=== Desktop environments ===<br />
<br />
For desktop environments, the {{ic|redshift-gtk}} command is installed with the {{Pkg|redshift}} package. redshift-gtk provides a system tray icon for controlling redshift. redshift-gtk requires an optional dependency {{Pkg|python-gobject}} available from the [[Official repositories]].<br />
<br />
To start redshift-gtk automatically at startup, right-click the system tray icon and select 'Autostart'.<br />
<br />
== Configuration ==<br />
<br />
Redshift needs at least your location, i.e. its latitude and longitude, in order to start. Redshift employs several routines for obtaining your location automatically. If none of them works (e.g. because none of the helper programs it uses is installed), you need to enter your location manually. For most places/cities, an easy way is to look up the Wikipedia page of that place and take the coordinates from there (search the page for "coordinates").<br />
<br />
=== Quick start ===<br />
<br />
To just get it up and running with a basic setup, issue:<br />
<br />
$ redshift -l LAT:LON<br />
<br />
where LAT is the latitude and LON is the longitude of your location.<br />
<br />
=== Automatic location based on GPS ===<br />
<br />
You can also use {{Pkg|gpsd}} to determine your GPS location automatically and use it as an input for Redshift. Create the following script and pass {{ic|$lat}} and {{ic|$lon}} to {{ic|redshift -l "$lat:$lon"}} (note the {{ic|LAT:LON}} colon separator; an unquoted semicolon would be split by the shell):<br />
<br />
#!/bin/bash<br />
date<br />
#gpsdata=$( gpspipe -w -n 10 | grep -m 1 lon )<br />
gpsdata=$( gpspipe -w | grep -m 1 TPV )<br />
lat=$( echo "$gpsdata" | jsawk 'return this.lat' )<br />
lon=$( echo "$gpsdata" | jsawk 'return this.lon' )<br />
alt=$( echo "$gpsdata" | jsawk 'return this.alt' )<br />
dt=$( echo "$gpsdata" | jsawk 'return this.time' )<br />
echo "$dt"<br />
echo "You are here: $lat, $lon at $alt"<br />
<br />
For more information, see [https://bbs.archlinux.org/viewtopic.php?pid=1389735#p1389735 this forum thread].<br />
<br />
=== Manual setup ===<br />
<br />
Redshift reads the configuration file {{ic|~/.config/redshift.conf}}, if it exists. However, Redshift does not create that configuration file, so you have to create it manually.<br />
Example for Hamburg/Germany:<br />
<br />
{{hc|~/.config/redshift.conf|<br />
; Global settings<br />
[redshift]<br />
temp-day&#61;5700<br />
temp-night&#61;3500<br />
transition&#61;1<br />
gamma&#61;0.8:0.7:0.8<br />
location-provider&#61;manual<br />
adjustment-method&#61;vidmode<br />
<br />
; The location provider and adjustment method settings<br />
; are in their own sections.<br />
[manual]<br />
; Hamburg<br />
lat&#61;53.3<br />
lon&#61;10.0<br />
<br />
; In this example screen 1 is adjusted by vidmode. Note<br />
; that the numbering starts from 0, so this is actually<br />
; the second screen.<br />
[vidmode]<br />
screen&#61;0<br />
screen&#61;1<br />
}}<br />
<br />
== Troubleshooting ==<br />
<br />
=== redshift-gtk will not start ===<br />
<br />
redshift-gtk requires optional dependencies to work correctly. To identify any missing dependencies, run {{ic|redshift-gtk}} from the command line. If a dependency is missing, output similar to the following will be produced:<br />
<br />
Traceback (most recent call last):<br />
File "/usr/bin/redshift-gtk", line 26, in <module><br />
from redshift_gtk.statusicon import run<br />
File "/usr/lib/python3.4/site-packages/redshift_gtk/statusicon.py", line 31, in <module><br />
from gi.repository import Gtk, GLib<br />
ImportError: No module named 'gi.repository'<br />
<br />
If this is the case, installing {{Pkg|python-gobject}}, {{Pkg|python-xdg}}, and {{Pkg|librsvg}} from the [[Official repositories]] will resolve the issue.<br />
<br />
== See also ==<br />
* [http://jonls.dk/redshift Redshift website]<br />
* [https://launchpad.net/redshift Redshift on launchpad]</div>Demizerhttps://wiki.archlinux.org/index.php?title=Redshift&diff=312806Redshift2014-05-01T02:32:09Z<p>Demizer: Remove newline in introduction.</p>
<hr />
<div>[[Category:X Server]]<br />
[[Category:Graphics]]<br />
[[Category:Eye candy]]<br />
[[Category:Audio/Video]]<br />
From the [http://jonls.dk/redshift/ redshift project web page]:<br />
<br />
:''Redshift adjusts the color temperature of your screen according to your surroundings. This may help your eyes hurt less if you are working in front of the screen at night. This program is inspired by [http://justgetflux.com f.lux] [...].''<br />
<br />
The project is developed on [https://github.com/jonls/redshift GitHub].<br />
<br />
== Installation ==<br />
<br />
The {{Pkg|redshift}} package is available in the [[Official repositories]].<br />
<br />
{{Note|If Redshift will not start, see section [[#Troubleshooting]].}}<br />
<br />
=== Desktop environments ===<br />
<br />
For desktop environments, the {{ic|redshift-gtk}} command is installed with the {{Pkg|redshift}} package. redshift-gtk provides a system tray icon for controlling redshift. redshift-gtk requires an optional dependency {{Pkg|python-gobject}} available from the [[Official repositories]].<br />
<br />
To start redshift-gtk automatically at startup, right-click the system tray icon and select 'Autostart'.<br />
<br />
== Configuration ==<br />
<br />
Redshift needs at least your location, i.e. its latitude and longitude, in order to start. Redshift employs several routines for obtaining your location automatically. If none of them works (e.g. because none of the helper programs it uses is installed), you need to enter your location manually. For most places/cities, an easy way is to look up the Wikipedia page of that place and take the coordinates from there (search the page for "coordinates").<br />
<br />
=== Quick start ===<br />
<br />
To just get it up and running with a basic setup, issue:<br />
<br />
$ redshift -l LAT:LON<br />
<br />
where LAT is the latitude and LON is the longitude of your location.<br />
<br />
=== Automatic location based on GPS ===<br />
<br />
You can also use {{Pkg|gpsd}} to determine your GPS location automatically and use it as an input for Redshift. Create the following script and pass {{ic|$lat}} and {{ic|$lon}} to {{ic|redshift -l "$lat:$lon"}} (note the {{ic|LAT:LON}} colon separator; an unquoted semicolon would be split by the shell):<br />
<br />
#!/bin/bash<br />
date<br />
#gpsdata=$( gpspipe -w -n 10 | grep -m 1 lon )<br />
gpsdata=$( gpspipe -w | grep -m 1 TPV )<br />
lat=$( echo "$gpsdata" | jsawk 'return this.lat' )<br />
lon=$( echo "$gpsdata" | jsawk 'return this.lon' )<br />
alt=$( echo "$gpsdata" | jsawk 'return this.alt' )<br />
dt=$( echo "$gpsdata" | jsawk 'return this.time' )<br />
echo "$dt"<br />
echo "You are here: $lat, $lon at $alt"<br />
<br />
For more information, see [https://bbs.archlinux.org/viewtopic.php?pid=1389735#p1389735 this forum thread].<br />
<br />
=== Manual setup ===<br />
<br />
Redshift reads the configuration file {{ic|~/.config/redshift.conf}}, if it exists. However, Redshift does not create that configuration file, so you have to create it manually.<br />
Example for Hamburg/Germany:<br />
<br />
{{hc|~/.config/redshift.conf|<br />
; Global settings<br />
[redshift]<br />
temp-day&#61;5700<br />
temp-night&#61;3500<br />
transition&#61;1<br />
gamma&#61;0.8:0.7:0.8<br />
location-provider&#61;manual<br />
adjustment-method&#61;vidmode<br />
<br />
; The location provider and adjustment method settings<br />
; are in their own sections.<br />
[manual]<br />
; Hamburg<br />
lat&#61;53.3<br />
lon&#61;10.0<br />
<br />
; In this example screen 1 is adjusted by vidmode. Note<br />
; that the numbering starts from 0, so this is actually<br />
; the second screen.<br />
[vidmode]<br />
screen&#61;0<br />
screen&#61;1<br />
}}<br />
<br />
== Troubleshooting ==<br />
<br />
=== redshift-gtk will not start ===<br />
<br />
redshift-gtk requires optional dependencies to work correctly. To identify any missing dependencies, run {{ic|redshift-gtk}} from the command line. If a dependency is missing, output similar to the following will be produced:<br />
<br />
Traceback (most recent call last):<br />
File "/usr/bin/redshift-gtk", line 26, in <module><br />
from redshift_gtk.statusicon import run<br />
File "/usr/lib/python3.4/site-packages/redshift_gtk/statusicon.py", line 31, in <module><br />
from gi.repository import Gtk, GLib<br />
ImportError: No module named 'gi.repository'<br />
<br />
If this is the case, installing {{Pkg|python-gobject}}, {{Pkg|python-xdg}}, and {{Pkg|librsvg}} from the [[Official repositories]] will resolve the issue.<br />
<br />
== See also ==<br />
* [http://jonls.dk/redshift Redshift website]<br />
* [https://launchpad.net/redshift Redshift on launchpad]</div>Demizerhttps://wiki.archlinux.org/index.php?title=Redshift&diff=312805Redshift2014-05-01T02:31:41Z<p>Demizer: Fix quoting in introduction.</p>
<hr />
<div>[[Category:X Server]]<br />
[[Category:Graphics]]<br />
[[Category:Eye candy]]<br />
[[Category:Audio/Video]]<br />
<br />
From the [http://jonls.dk/redshift/ redshift project web page]:<br />
<br />
:''Redshift adjusts the color temperature of your screen according to your surroundings. This may help your eyes hurt less if you are working in front of the screen at night. This program is inspired by [http://justgetflux.com f.lux] [...].''<br />
<br />
The project is developed on [https://github.com/jonls/redshift GitHub].<br />
<br />
== Installation ==<br />
<br />
The {{Pkg|redshift}} package is available in the [[Official repositories]].<br />
<br />
{{Note|If Redshift will not start, see section [[#Troubleshooting]].}}<br />
<br />
=== Desktop environments ===<br />
<br />
For desktop environments, the {{ic|redshift-gtk}} command is installed with the {{Pkg|redshift}} package. redshift-gtk provides a system tray icon for controlling redshift. redshift-gtk requires an optional dependency {{Pkg|python-gobject}} available from the [[Official repositories]].<br />
<br />
To start redshift-gtk automatically at startup, right-click the system tray icon and select 'Autostart'.<br />
<br />
== Configuration ==<br />
<br />
Redshift needs at least your location, i.e. its latitude and longitude, in order to start. Redshift employs several routines for obtaining your location automatically. If none of them works (e.g. because none of the helper programs it uses is installed), you need to enter your location manually. For most places/cities, an easy way is to look up the Wikipedia page of that place and take the coordinates from there (search the page for "coordinates").<br />
<br />
=== Quick start ===<br />
<br />
To just get it up and running with a basic setup, issue:<br />
<br />
$ redshift -l LAT:LON<br />
<br />
where LAT is the latitude and LON is the longitude of your location.<br />
<br />
=== Automatic location based on GPS ===<br />
<br />
You can also use {{Pkg|gpsd}} to determine your GPS location automatically and use it as an input for Redshift. Create the following script and pass {{ic|$lat}} and {{ic|$lon}} to {{ic|redshift -l "$lat:$lon"}} (note the {{ic|LAT:LON}} colon separator; an unquoted semicolon would be split by the shell):<br />
<br />
#!/bin/bash<br />
date<br />
#gpsdata=$( gpspipe -w -n 10 | grep -m 1 lon )<br />
gpsdata=$( gpspipe -w | grep -m 1 TPV )<br />
lat=$( echo "$gpsdata" | jsawk 'return this.lat' )<br />
lon=$( echo "$gpsdata" | jsawk 'return this.lon' )<br />
alt=$( echo "$gpsdata" | jsawk 'return this.alt' )<br />
dt=$( echo "$gpsdata" | jsawk 'return this.time' )<br />
echo "$dt"<br />
echo "You are here: $lat, $lon at $alt"<br />
<br />
For more information, see [https://bbs.archlinux.org/viewtopic.php?pid=1389735#p1389735 this forum thread].<br />
<br />
=== Manual setup ===<br />
<br />
Redshift reads the configuration file {{ic|~/.config/redshift.conf}}, if it exists. However, Redshift does not create that configuration file, so you have to create it manually.<br />
Example for Hamburg/Germany:<br />
<br />
{{hc|~/.config/redshift.conf|<br />
; Global settings<br />
[redshift]<br />
temp-day&#61;5700<br />
temp-night&#61;3500<br />
transition&#61;1<br />
gamma&#61;0.8:0.7:0.8<br />
location-provider&#61;manual<br />
adjustment-method&#61;vidmode<br />
<br />
; The location provider and adjustment method settings<br />
; are in their own sections.<br />
[manual]<br />
; Hamburg<br />
lat&#61;53.3<br />
lon&#61;10.0<br />
<br />
; In this example screen 1 is adjusted by vidmode. Note<br />
; that the numbering starts from 0, so this is actually<br />
; the second screen.<br />
[vidmode]<br />
screen&#61;0<br />
screen&#61;1<br />
}}<br />
<br />
== Troubleshooting ==<br />
<br />
=== redshift-gtk will not start ===<br />
<br />
redshift-gtk requires optional dependencies to work correctly. To identify any missing dependencies, run {{ic|redshift-gtk}} from the command line. If a dependency is missing, output similar to the following will be produced:<br />
<br />
Traceback (most recent call last):<br />
File "/usr/bin/redshift-gtk", line 26, in <module><br />
from redshift_gtk.statusicon import run<br />
File "/usr/lib/python3.4/site-packages/redshift_gtk/statusicon.py", line 31, in <module><br />
from gi.repository import Gtk, GLib<br />
ImportError: No module named 'gi.repository'<br />
<br />
If this is the case, installing {{Pkg|python-gobject}}, {{Pkg|python-xdg}}, and {{Pkg|librsvg}} from the [[Official repositories]] will resolve the issue.<br />
<br />
== See also ==<br />
* [http://jonls.dk/redshift Redshift website]<br />
* [https://launchpad.net/redshift Redshift on launchpad]</div>Demizerhttps://wiki.archlinux.org/index.php?title=Redshift&diff=312803Redshift2014-05-01T02:28:11Z<p>Demizer: Rewrite missing dependencies section.</p>
<hr />
<div>[[Category:X Server]]<br />
[[Category:Graphics]]<br />
[[Category:Eye candy]]<br />
[[Category:Audio/Video]]<br />
The [http://jonls.dk/redshift/ website] states:<br />
<br />
"Redshift adjusts the color temperature of your screen according to your surroundings. This may help your eyes hurt less if you are working in front of the screen at night. This program is inspired by [http://justgetflux.com f.lux] [...]."<br />
<br />
The project is developed on [https://github.com/jonls/redshift GitHub].<br />
<br />
== Installation ==<br />
<br />
The {{Pkg|redshift}} package is available in the [[Official repositories]].<br />
<br />
{{Note|If Redshift will not start, see section [[#Troubleshooting]].}}<br />
<br />
=== Desktop environments ===<br />
<br />
For desktop environments, the {{ic|redshift-gtk}} command is installed with the {{Pkg|redshift}} package. redshift-gtk provides a system tray icon for controlling redshift. redshift-gtk requires an optional dependency {{Pkg|python-gobject}} available from the [[Official repositories]].<br />
<br />
To start redshift-gtk automatically at startup, right-click the system tray icon and select 'Autostart'.<br />
<br />
== Configuration ==<br />
<br />
Redshift needs at least your location, i.e. its latitude and longitude, in order to start. Redshift employs several routines for obtaining your location automatically. If none of them works (e.g. because none of the helper programs it uses is installed), you need to enter your location manually. For most places/cities, an easy way is to look up the Wikipedia page of that place and take the coordinates from there (search the page for "coordinates").<br />
<br />
=== Quick start ===<br />
<br />
To just get it up and running with a basic setup, issue:<br />
<br />
$ redshift -l LAT:LON<br />
<br />
where LAT is the latitude and LON is the longitude of your location.<br />
<br />
=== Automatic location based on GPS ===<br />
<br />
You can also use {{Pkg|gpsd}} to determine your GPS location automatically and use it as an input for Redshift. Create the following script and pass {{ic|$lat}} and {{ic|$lon}} to {{ic|redshift -l "$lat:$lon"}} (note the {{ic|LAT:LON}} colon separator; an unquoted semicolon would be split by the shell):<br />
<br />
#!/bin/bash<br />
date<br />
#gpsdata=$( gpspipe -w -n 10 | grep -m 1 lon )<br />
gpsdata=$( gpspipe -w | grep -m 1 TPV )<br />
lat=$( echo "$gpsdata" | jsawk 'return this.lat' )<br />
lon=$( echo "$gpsdata" | jsawk 'return this.lon' )<br />
alt=$( echo "$gpsdata" | jsawk 'return this.alt' )<br />
dt=$( echo "$gpsdata" | jsawk 'return this.time' )<br />
echo "$dt"<br />
echo "You are here: $lat, $lon at $alt"<br />
<br />
For more information, see [https://bbs.archlinux.org/viewtopic.php?pid=1389735#p1389735 this forum thread].<br />
<br />
=== Manual setup ===<br />
<br />
Redshift reads the configuration file {{ic|~/.config/redshift.conf}}, if it exists. However, Redshift does not create that configuration file, so you have to create it manually.<br />
Example for Hamburg/Germany:<br />
<br />
{{hc|~/.config/redshift.conf|<br />
; Global settings<br />
[redshift]<br />
temp-day&#61;5700<br />
temp-night&#61;3500<br />
transition&#61;1<br />
gamma&#61;0.8:0.7:0.8<br />
location-provider&#61;manual<br />
adjustment-method&#61;vidmode<br />
<br />
; The location provider and adjustment method settings<br />
; are in their own sections.<br />
[manual]<br />
; Hamburg<br />
lat&#61;53.3<br />
lon&#61;10.0<br />
<br />
; In this example screen 1 is adjusted by vidmode. Note<br />
; that the numbering starts from 0, so this is actually<br />
; the second screen.<br />
[vidmode]<br />
screen&#61;0<br />
screen&#61;1<br />
}}<br />
<br />
== Troubleshooting ==<br />
<br />
=== redshift-gtk will not start ===<br />
<br />
redshift-gtk requires optional dependencies to work correctly. To identify any missing dependencies, run {{ic|redshift-gtk}} from the command line. If a dependency is missing, output similar to the following will be produced:<br />
<br />
Traceback (most recent call last):<br />
File "/usr/bin/redshift-gtk", line 26, in <module><br />
from redshift_gtk.statusicon import run<br />
File "/usr/lib/python3.4/site-packages/redshift_gtk/statusicon.py", line 31, in <module><br />
from gi.repository import Gtk, GLib<br />
ImportError: No module named 'gi.repository'<br />
<br />
If this is the case, installing {{Pkg|python-gobject}}, {{Pkg|python-xdg}}, and {{Pkg|librsvg}} from the [[Official repositories]] will resolve the issue.<br />
<br />
== See also ==<br />
* [http://jonls.dk/redshift Redshift website]<br />
* [https://launchpad.net/redshift Redshift on launchpad]</div>Demizerhttps://wiki.archlinux.org/index.php?title=Redshift&diff=312800Redshift2014-05-01T02:18:59Z<p>Demizer: Remove redshift-gtk info from manual setup section.</p>
<hr />
<div>[[Category:X Server]]<br />
[[Category:Graphics]]<br />
[[Category:Eye candy]]<br />
[[Category:Audio/Video]]<br />
The [http://jonls.dk/redshift/ website] states:<br />
<br />
"Redshift adjusts the color temperature of your screen according to your surroundings. This may help your eyes hurt less if you are working in front of the screen at night. This program is inspired by [http://justgetflux.com f.lux] [...]."<br />
<br />
The project is developed on [https://github.com/jonls/redshift GitHub].<br />
<br />
== Installation ==<br />
<br />
The {{Pkg|redshift}} package is available in the [[Official repositories]].<br />
<br />
{{Note|If Redshift will not start, see section [[#Troubleshooting]].}}<br />
<br />
=== Desktop environments ===<br />
<br />
For desktop environments, the {{ic|redshift-gtk}} command is installed with the {{Pkg|redshift}} package. redshift-gtk provides a system tray icon for controlling redshift. redshift-gtk requires an optional dependency {{Pkg|python-gobject}} available from the [[Official repositories]].<br />
<br />
To start redshift-gtk automatically at startup, right-click the system tray icon and select 'Autostart'.<br />
<br />
== Configuration ==<br />
<br />
Redshift needs at least your location, i.e. its latitude and longitude, in order to start. Redshift employs several routines for obtaining your location automatically. If none of them works (e.g. because none of the helper programs it uses is installed), you need to enter your location manually. For most places/cities, an easy way is to look up the Wikipedia page of that place and take the coordinates from there (search the page for "coordinates").<br />
<br />
=== Quick start ===<br />
<br />
To just get it up and running with a basic setup, issue:<br />
<br />
$ redshift -l LAT:LON<br />
<br />
where LAT is the latitude and LON is the longitude of your location.<br />
<br />
=== Automatic location based on GPS ===<br />
<br />
You can also use {{Pkg|gpsd}} to determine your GPS location automatically and use it as an input for Redshift. Create the following script and pass {{ic|$lat}} and {{ic|$lon}} to {{ic|redshift -l "$lat:$lon"}} (note the {{ic|LAT:LON}} colon separator; an unquoted semicolon would be split by the shell):<br />
<br />
#!/bin/bash<br />
date<br />
#gpsdata=$( gpspipe -w -n 10 | grep -m 1 lon )<br />
gpsdata=$( gpspipe -w | grep -m 1 TPV )<br />
lat=$( echo "$gpsdata" | jsawk 'return this.lat' )<br />
lon=$( echo "$gpsdata" | jsawk 'return this.lon' )<br />
alt=$( echo "$gpsdata" | jsawk 'return this.alt' )<br />
dt=$( echo "$gpsdata" | jsawk 'return this.time' )<br />
echo "$dt"<br />
echo "You are here: $lat, $lon at $alt"<br />
<br />
For more information, see [https://bbs.archlinux.org/viewtopic.php?pid=1389735#p1389735 this forum thread].<br />
<br />
=== Manual setup ===<br />
<br />
Redshift reads the configuration file {{ic|~/.config/redshift.conf}}, if it exists. However, Redshift does not create that configuration file, so you have to create it manually.<br />
Example for Hamburg/Germany:<br />
<br />
{{hc|~/.config/redshift.conf|<br />
; Global settings<br />
[redshift]<br />
temp-day&#61;5700<br />
temp-night&#61;3500<br />
transition&#61;1<br />
gamma&#61;0.8:0.7:0.8<br />
location-provider&#61;manual<br />
adjustment-method&#61;vidmode<br />
<br />
; The location provider and adjustment method settings<br />
; are in their own sections.<br />
[manual]<br />
; Hamburg<br />
lat&#61;53.3<br />
lon&#61;10.0<br />
<br />
; In this example screen 1 is adjusted by vidmode. Note<br />
; that the numbering starts from 0, so this is actually<br />
; the second screen.<br />
[vidmode]<br />
screen&#61;0<br />
screen&#61;1<br />
}}<br />
<br />
== Troubleshooting ==<br />
<br />
=== Missing dependency ===<br />
<br />
{{Pkg|python2-xdg}}, {{Pkg|librsvg}} and {{Pkg|pygtk}} are optional dependencies of the {{Pkg|redshift}} package that are needed by redshift-gtk. If you run into problems when trying to run redshift-gtk, check whether they are installed. If they are not, install them as dependencies:<br />
# pacman --asdeps -S python2-xdg librsvg pygtk<br />
<br />
== See also ==<br />
* [http://jonls.dk/redshift Redshift website]<br />
* [https://launchpad.net/redshift Redshift on launchpad]</div>Demizerhttps://wiki.archlinux.org/index.php?title=Redshift&diff=312799Redshift2014-05-01T02:18:00Z<p>Demizer: Add auto start information for desktop environments.</p>
<hr />
<div>[[Category:X Server]]<br />
[[Category:Graphics]]<br />
[[Category:Eye candy]]<br />
[[Category:Audio/Video]]<br />
The [http://jonls.dk/redshift/ website] states:<br />
<br />
"Redshift adjusts the color temperature of your screen according to your surroundings. This may help your eyes hurt less if you are working in front of the screen at night. This program is inspired by [http://justgetflux.com f.lux] [...]."<br />
<br />
The project is developed on [https://github.com/jonls/redshift GitHub].<br />
<br />
== Installation ==<br />
<br />
The {{Pkg|redshift}} package is available in the [[Official repositories]].<br />
<br />
{{Note|If Redshift will not start, see section [[#Troubleshooting]].}}<br />
<br />
=== Desktop environments ===<br />
<br />
For desktop environments, the {{ic|redshift-gtk}} command is installed with the {{Pkg|redshift}} package. redshift-gtk provides a system tray icon for controlling redshift. redshift-gtk requires an optional dependency {{Pkg|python-gobject}} available from the [[Official repositories]].<br />
<br />
To start redshift-gtk automatically at startup, right-click the system tray icon and select 'Autostart'.<br />
<br />
== Configuration ==<br />
<br />
Redshift needs at least your location, i.e. its latitude and longitude, in order to start. Redshift employs several routines for obtaining your location automatically. If none of them works (e.g. because none of the helper programs it uses is installed), you need to enter your location manually. For most places/cities, an easy way is to look up the Wikipedia page of that place and take the coordinates from there (search the page for "coordinates").<br />
<br />
=== Quick start ===<br />
<br />
To just get it up and running with a basic setup, issue:<br />
<br />
$ redshift -l LAT:LON<br />
<br />
where LAT is the latitude and LON is the longitude of your location.<br />
<br />
=== Automatic location based on GPS ===<br />
<br />
You can also use {{Pkg|gpsd}} to determine your GPS location automatically and use it as an input for Redshift. Create the following script and pass {{ic|$lat}} and {{ic|$lon}} to {{ic|redshift -l "$lat:$lon"}} (note the {{ic|LAT:LON}} colon separator; an unquoted semicolon would be split by the shell):<br />
<br />
#!/bin/bash<br />
date<br />
#gpsdata=$( gpspipe -w -n 10 | grep -m 1 lon )<br />
gpsdata=$( gpspipe -w | grep -m 1 TPV )<br />
lat=$( echo "$gpsdata" | jsawk 'return this.lat' )<br />
lon=$( echo "$gpsdata" | jsawk 'return this.lon' )<br />
alt=$( echo "$gpsdata" | jsawk 'return this.alt' )<br />
dt=$( echo "$gpsdata" | jsawk 'return this.time' )<br />
echo "$dt"<br />
echo "You are here: $lat, $lon at $alt"<br />
<br />
For more information, see [https://bbs.archlinux.org/viewtopic.php?pid=1389735#p1389735 this forum thread].<br />
<br />
=== Manual setup ===<br />
<br />
Redshift reads the configuration file {{ic|~/.config/redshift.conf}}, if it exists. However, Redshift does not create that configuration file, so you have to create it manually.<br />
Example for Hamburg/Germany:<br />
<br />
{{hc|~/.config/redshift.conf|<br />
; Global settings<br />
[redshift]<br />
temp-day&#61;5700<br />
temp-night&#61;3500<br />
transition&#61;1<br />
gamma&#61;0.8:0.7:0.8<br />
location-provider&#61;manual<br />
adjustment-method&#61;vidmode<br />
<br />
; The location provider and adjustment method settings<br />
; are in their own sections.<br />
[manual]<br />
; Hamburg<br />
lat&#61;53.3<br />
lon&#61;10.0<br />
<br />
; In this example screen 1 is adjusted by vidmode. Note<br />
; that the numbering starts from 0, so this is actually<br />
; the second screen.<br />
[vidmode]<br />
screen&#61;0<br />
screen&#61;1<br />
}}<br />
<br />
After you have created that file, start Redshift from the menu of your desktop environment (the entry is called "redshift-gtk") or type the following in your terminal:<br />
<br />
$ redshift-gtk &<br />
<br />
Using "redshift-gtk" instead of "redshift" launches Redshift with a system tray icon for easier handling of the application.<br />
Finally, if you want to start Redshift automatically on system startup, right-click the system tray icon and check "Autostart".<br />
<br />
== Troubleshooting ==<br />
<br />
=== Missing dependency ===<br />
<br />
{{Pkg|python2-xdg}}, {{Pkg|librsvg}} and {{Pkg|pygtk}} are optional dependencies of the {{Pkg|redshift}} package that are needed by redshift-gtk. If you run into problems when trying to run redshift-gtk, check whether they are installed. If they are not, install them as dependencies:<br />
# pacman --asdeps -S python2-xdg librsvg pygtk<br />
<br />
== See also ==<br />
* [http://jonls.dk/redshift Redshift website]<br />
* [https://launchpad.net/redshift Redshift on launchpad]</div>Demizerhttps://wiki.archlinux.org/index.php?title=Redshift&diff=312798Redshift2014-05-01T02:16:52Z<p>Demizer: Add warning about redshift-gtk command.</p>
<hr />
<div>[[Category:X Server]]<br />
[[Category:Graphics]]<br />
[[Category:Eye candy]]<br />
[[Category:Audio/Video]]<br />
The [http://jonls.dk/redshift/ website] states:<br />
<br />
"Redshift adjusts the color temperature of your screen according to your surroundings. This may help your eyes hurt less if you are working in front of the screen at night. This program is inspired by [http://justgetflux.com f.lux] [...]."<br />
<br />
The project is developed on [https://github.com/jonls/redshift GitHub].<br />
<br />
== Installation ==<br />
<br />
The {{Pkg|redshift}} package is available in the [[Official repositories]].<br />
<br />
{{Note|If Redshift will not start, see section [[#Troubleshooting]].}}<br />
<br />
=== Desktop environments ===<br />
<br />
For desktop environments, the {{ic|redshift-gtk}} command is installed with the {{Pkg|redshift}} package. redshift-gtk provides a system tray icon for controlling redshift. redshift-gtk requires an optional dependency {{Pkg|python-gobject}} available from the [[Official repositories]].<br />
<br />
== Configuration ==<br />
<br />
Redshift needs at least your location, i.e. its latitude and longitude, in order to start. Redshift employs several routines for obtaining your location automatically. If none of them works (e.g. because none of the helper programs it uses is installed), you need to enter your location manually. For most places/cities, an easy way is to look up the Wikipedia page of that place and take the coordinates from there (search the page for "coordinates").<br />
<br />
=== Quick start ===<br />
<br />
To just get it up and running with a basic setup, issue:<br />
<br />
$ redshift -l LAT:LON<br />
<br />
where LAT is the latitude and LON is the longitude of your location.<br />
<br />
=== Automatic location based on GPS ===<br />
<br />
You can also use {{Pkg|gpsd}} to determine your GPS location automatically and use it as an input for Redshift. Create the following script and pass {{ic|$lat}} and {{ic|$lon}} to {{ic|redshift -l "$lat:$lon"}} (note the {{ic|LAT:LON}} colon separator; an unquoted semicolon would be split by the shell):<br />
<br />
#!/bin/bash<br />
date<br />
#gpsdata=$( gpspipe -w -n 10 | grep -m 1 lon )<br />
gpsdata=$( gpspipe -w | grep -m 1 TPV )<br />
lat=$( echo "$gpsdata" | jsawk 'return this.lat' )<br />
lon=$( echo "$gpsdata" | jsawk 'return this.lon' )<br />
alt=$( echo "$gpsdata" | jsawk 'return this.alt' )<br />
dt=$( echo "$gpsdata" | jsawk 'return this.time' )<br />
echo "$dt"<br />
echo "You are here: $lat, $lon at $alt"<br />
<br />
For more information, see [https://bbs.archlinux.org/viewtopic.php?pid=1389735#p1389735 this forum thread].<br />
<br />
=== Manual setup ===<br />
<br />
Redshift reads the configuration file {{ic|~/.config/redshift.conf}}, if it exists. However, Redshift does not create that configuration file, so you have to create it manually.<br />
Example for Hamburg/Germany:<br />
<br />
{{hc|~/.config/redshift.conf|<br />
; Global settings<br />
[redshift]<br />
temp-day&#61;5700<br />
temp-night&#61;3500<br />
transition&#61;1<br />
gamma&#61;0.8:0.7:0.8<br />
location-provider&#61;manual<br />
adjustment-method&#61;vidmode<br />
<br />
; The location provider and adjustment method settings<br />
; are in their own sections.<br />
[manual]<br />
; Hamburg<br />
lat&#61;53.3<br />
lon&#61;10.0<br />
<br />
; In this example screen 1 is adjusted by vidmode. Note<br />
; that the numbering starts from 0, so this is actually<br />
; the second screen.<br />
[vidmode]<br />
screen&#61;0<br />
screen&#61;1<br />
}}<br />
<br />
After you have created that file, start Redshift from the menu of your desktop environment (the entry is called "redshift-gtk") or type the following in your terminal:<br />
<br />
$ redshift-gtk &<br />
<br />
Using "redshift-gtk" instead of "redshift" launches Redshift with a system tray icon for easier handling of the application.<br />
Finally, if you want to start Redshift automatically on system startup, right-click the system tray icon and check "Autostart".<br />
<br />
== Troubleshooting ==<br />
<br />
=== Missing dependency ===<br />
<br />
{{Pkg|python2-xdg}}, {{Pkg|librsvg}} and {{Pkg|pygtk}} are optional dependencies of the {{Pkg|redshift}} package that are needed by redshift-gtk. If you run into problems when trying to run redshift-gtk, check whether they are installed. If they are not, install them as dependencies:<br />
# pacman --asdeps -S python2-xdg librsvg pygtk<br />
<br />
== See also ==<br />
* [http://jonls.dk/redshift Redshift website]<br />
* [https://launchpad.net/redshift Redshift on launchpad]</div>Demizerhttps://wiki.archlinux.org/index.php?title=Unofficial_user_repositories&diff=311515Unofficial user repositories2014-04-22T23:42:36Z<p>Demizer: /* demz-repo-archiso */</p>
<hr />
<div>[[Category:Package management]]<br />
{{Related articles start}}<br />
{{Related|pacman-key}}<br />
{{Related|Official repositories}}<br />
{{Related articles end}} <br />
Because the AUR only allows users to upload PKGBUILDs and other package-build-related files, but does not provide a means of distributing binary packages, a user may want to create a binary repository of their packages elsewhere. See [[Pacman Tips#Custom local repository]] for more information.<br />
<br />
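At its simplest, such a binary repository is a directory of built packages plus a database generated with {{ic|repo-add}}. A minimal sketch, assuming the built packages sit in the directory being served (the repository name is illustrative):<br />
<br />
$ repo-add myrepo.db.tar.gz *.pkg.tar.xz<br />
<br />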
If you have your own repository, please add it to this page, so that other users will know where to find your packages. Please observe the following rules when adding new repositories:<br />
<br />
* Keep the lists in alphabetical order.<br />
* Include some information about the maintainer: include at least a (nick)name and some form of contact information (web site, email address, user page on ArchWiki or the forums, etc.).<br />
* If the repository is of the ''signed'' variety, please include a key-id, possibly using it as the anchor for a link to its keyserver; if the key is not on a keyserver, include a link to the key file.<br />
* Include some short description (e.g. the category of packages provided in the repository).<br />
* If there is a page (either on ArchWiki or external) containing more information about the repository, include a link to it.<br />
* If possible, avoid using comments in code blocks. The formatted description is much more readable. Users who want some comments in their {{ic|pacman.conf}} can easily create it on their own.<br />
<br />
{{Note|If you are looking to add a signed repository to your {{ic|pacman.conf}}, you must be familiar with [[Pacman-key#Adding unofficial keys]].}}<br />
<br />
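For example, for a signed repository listing Key-ID 0EE7A126 (used by the archzfs-related repositories below), the key would typically be received and locally signed before the first sync; a sketch, assuming a reachable keyserver:<br />
<br />
# pacman-key -r 0EE7A126<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />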
{{Expansion|Please fill in the missing information about maintainers.}}<br />
<br />
== Any ==<br />
<br />
"Any" repositories are architecture-independent. In other words, they can be used on both i686 and x86_64 systems.<br />
<br />
=== Signed ===<br />
<br />
==== bioinformatics-any ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/decryptedepsilon/ decryptedepsilon]<br />
* '''Description:''' A repository containing some python packages and genome browser for Bioinformatics<br />
* '''Key-ID:''' 60442BA4<br />
<br />
{{bc|<nowiki><br />
[bioinformatics-any]<br />
Server = http://decryptedepsilon.bl.ee/repo/any<br />
</nowiki>}}<br />
<br />
==== infinality-bundle-fonts ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle-fonts repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle-fonts]<br />
Server = http://bohoomil.com/repo/fonts<br />
</nowiki>}}<br />
<br />
==== xyne-any ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for "any" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{Note|Use this repository only if there is no matching {{ic|[xyne-*]}} repository for your architecture.}}<br />
<br />
{{bc|<nowiki><br />
[xyne-any]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
==== archlinuxgr-any ====<br />
* '''Maintainer:'''<br />
* '''Description:''' The Hellenic (Greek) unofficial Arch Linux repository with many interesting packages.<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr-any]<br />
Server = http://archlinuxgr.tiven.org/archlinux/any<br />
</nowiki>}}<br />
<br />
== Both i686 and x86_64 ==<br />
<br />
Repositories with both i686 and x86_64 versions. The {{ic|$arch}} variable will be set automatically by pacman.<br />
<br />
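The value substituted for {{ic|$arch}} comes from the {{ic|Architecture}} option in {{ic|pacman.conf}}; with the setting shipped by default, it simply matches the running machine:<br />
<br />
[options]<br />
Architecture = auto<br />
<br />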
=== Signed ===<br />
<br />
==== arcanisrepo ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#arcanis arcanis]<br />
* '''Description:''' A repository with some AUR packages including packages from VCS<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[arcanisrepo]<br />
Server = ftp://repo.arcanis.name/repo/$arch<br />
</nowiki>}}<br />
<br />
==== bbqlinux ====<br />
<br />
* '''Maintainer:''' [https://plus.google.com/u/0/+DanielHillenbrand/about Daniel Hillenbrand]<br />
* '''Description:''' Packages for Android Development<br />
* '''Upstream Page:''' http://bbqlinux.org/<br />
* '''Key-ID:''' Get the bbqlinux-keyring package, as it contains the needed keys.<br />
<br />
{{bc|<nowiki><br />
[bbqlinux]<br />
Server = http://packages.bbqlinux.org/$arch<br />
</nowiki>}}<br />
==== carstene1ns ====<br />
<br />
* '''Maintainer:''' [[User:Carstene1ns|Carsten Teibes]]<br />
* '''Description:''' AUR packages maintained and/or used by Carsten Teibes (games/Wii/lib32/Python)<br />
* '''Upstream page:''' http://arch.carsten-teibes.de (still under construction)<br />
* '''Key-ID:''' 2476B20B<br />
<br />
{{bc|<nowiki><br />
[carstene1ns]<br />
Server = http://repo.carsten-teibes.de/$arch<br />
</nowiki>}}<br />
<br />
==== catalyst ====<br />
<br />
* '''Maintainer:''' [[User:Vi0L0 | Vi0l0]]<br />
* '''Description:''' ATI Catalyst proprietary drivers.<br />
* '''Upstream Page:''' http://catalyst.wirephire.com<br />
* '''Key-ID:''' 653C3094<br />
<br />
{{bc|<nowiki><br />
[catalyst]<br />
Server = http://catalyst.wirephire.com/repo/catalyst/$arch<br />
## Mirrors, if the primary server does not work or is too slow:<br />
#Server = http://70.239.162.206/catalyst-mirror/repo/catalyst/$arch<br />
#Server = http://mirror.rts-informatique.fr/archlinux-catalyst/repo/catalyst/$arch<br />
#Server = http://mirror.hactar.bz/Vi0L0/catalyst/$arch<br />
</nowiki>}}<br />
<br />
==== catalyst-hd234k ====<br />
<br />
* '''Maintainer:''' [[User:Vi0L0 | Vi0l0]]<br />
* '''Description:''' ATI Catalyst proprietary drivers.<br />
* '''Upstream Page:''' http://catalyst.wirephire.com<br />
* '''Key-ID:''' 653C3094<br />
<br />
{{bc|<nowiki><br />
[catalyst-hd234k]<br />
Server = http://catalyst.wirephire.com/repo/catalyst-hd234k/$arch<br />
## Mirrors, if the primary server does not work or is too slow:<br />
#Server = http://70.239.162.206/catalyst-mirror/repo/catalyst-hd234k/$arch<br />
#Server = http://mirror.rts-informatique.fr/archlinux-catalyst/repo/catalyst-hd234k/$arch<br />
#Server = http://mirror.hactar.bz/Vi0L0/catalyst-hd234k/$arch<br />
</nowiki>}}<br />
<br />
==== city ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#bgyorgy Balló György]<br />
* '''Description:''' Experimental/unpopular packages.<br />
* '''Upstream page:''' http://pkgbuild.com/~bgyorgy/city.html<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[city]<br />
Server = http://pkgbuild.com/~bgyorgy/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== crypto ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Includes tomb, tomb-git, and other related software.<br />
<br />
{{bc|<nowiki><br />
[crypto]<br />
Server = http://tomb.dyne.org/arch_repo/$arch<br />
</nowiki>}}<br />
<br />
==== demz-repo-core ====<br />
<br />
* '''Maintainer:''' [http://demizerone.com Jesus Alvarez (demizer)]<br />
* '''Description:''' Packages for ZFS on Arch Linux.<br />
* '''Upstream page:''' http://demizerone.com/archzfs<br />
* '''Key-ID:''' 0EE7A126<br />
<br />
{{bc|<nowiki><br />
[demz-repo-core]<br />
Server = http://demizerone.com/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== demz-repo-archiso ====<br />
<br />
* '''Maintainer:''' [http://demizerone.com Jesus Alvarez (demizer)]<br />
* '''Description:''' Packages for installing ZFS from an Arch ISO live disk<br />
* '''Upstream page:''' http://demizerone.com/archzfs<br />
* '''Key-ID:''' 0EE7A126<br />
<br />
{{bc|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== infinality-bundle ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle main repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle]<br />
Server = http://bohoomil.com/repo/$arch<br />
</nowiki>}}<br />
<br />
==== metalgamer ====<br />
<br />
* '''Maintainer:''' [http://metalgamer.eu/ metalgamer]<br />
* '''Description:''' Packages I use and/or maintain on the AUR.<br />
* '''Key ID:''' F55313FB<br />
<br />
{{bc|<nowiki><br />
[metalgamer]<br />
Server = http://repo.metalgamer.eu/$arch<br />
</nowiki>}}<br />
<br />
==== pipelight ====<br />
<br />
* '''Maintainer:''' <br />
* '''Description:''' Pipelight and wine-compholio<br />
* '''Upstream page:''' [http://fds-team.de/ fds-team.de]<br />
* '''Key-ID:''' E49CC0415DC2D5CA<br />
* '''Keyfile:''' http://repos.fds-team.de/Release.key<br />
{{bc|<nowiki>[pipelight]<br />
Server = http://repos.fds-team.de/stable/arch/$arch</nowiki>}}<br />
<br />
==== repo-ck ====<br />
<br />
* '''Maintainer:''' [[User:Graysky|graysky]]<br />
* '''Description:''' Kernel and modules with Brain Fuck Scheduler and all the goodies in the ck1 patch set.<br />
* '''Upstream page:''' [http://repo-ck.com repo-ck.com]<br />
* '''Wiki:''' [[repo-ck]]<br />
* '''Key-ID:''' 5EE46C4C<br />
<br />
{{bc|<nowiki><br />
[repo-ck]<br />
Server = http://repo-ck.com/$arch<br />
</nowiki>}}<br />
<br />
==== sergej-repo ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#spupykin Sergej Pupykin]<br />
* '''Description:''' psi-plus, owncloud-git, ziproxy, android, MySQL, and other stuff. Some packages also available for armv7h.<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[sergej-repo]<br />
Server = http://repo.p5n.pp.ru/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
<br />
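Taking the first entry below as an illustration, a complete unsigned entry then looks like this:<br />
<br />
[alucryd]<br />
SigLevel = PackageOptional<br />
Server = http://pkgbuild.com/~alucryd/$repo/$arch<br />
<br />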
==== alucryd ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Repository containing various packages Maxime Gauduin maintains (or not) in the AUR.<br />
<br />
{{bc|<nowiki><br />
[alucryd]<br />
Server = http://pkgbuild.com/~alucryd/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== archaudio ====<br />
<br />
* '''Maintainer:''' [[User:Schivmeister|Ray Rashif]], [https://aur.archlinux.org/account/jhernberg Joakim Hernberg]<br />
* '''Description:''' Pro-audio packages<br />
<br />
{{bc|<nowiki><br />
[archaudio-production]<br />
Server = http://repos.archaudio.org/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== archie-repo ====<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/Kalinda/ Kalinda]<br />
* '''Description:''' Repo for wine-silverlight, pipelight, and some misc packages.<br />
<br />
{{bc|<nowiki><br />
[archie-repo]<br />
Server = http://andontie.net/archie-repo/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxcn ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' The Chinese Arch Linux community's packages.<br />
<br />
{{bc|<nowiki><br />
[archlinuxcn]<br />
Server = http://repo.archlinuxcn.org/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxfr ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:'''<br />
* '''Upstream page:''' http://afur.archlinux.fr<br />
<br />
{{bc|<nowiki><br />
[archlinuxfr]<br />
Server = http://repo.archlinux.fr/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgis ====<br />
{{Note|Offline since 2014-03-29.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' Maintainers needed - low bandwidth<br />
<br />
{{bc|<nowiki><br />
[archlinuxgis]<br />
Server = http://archlinuxgis.no-ip.org/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgr ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:'''<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr]<br />
Server = http://archlinuxgr.tiven.org/archlinux/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgr-kde4 ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' KDE4 packages (plasmoids, themes etc) provided by the Hellenic (Greek) Arch Linux community<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr-kde4]<br />
Server = http://archlinuxgr.tiven.org/archlinux-kde4/$arch<br />
</nowiki>}}<br />
<br />
==== archstuff ====<br />
{{Note|Offline since 2014-01-06.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' The AUR's most-voted packages, plus many bin32-* and lib32-* packages.<br />
<br />
{{bc|<nowiki><br />
[archstuff]<br />
Server = http://archstuff.vs169092.vserver.de/$arch<br />
</nowiki>}}<br />
<br />
==== arsch ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' From users of orgizm.net<br />
<br />
{{bc|<nowiki><br />
[arsch]<br />
Server = http://arsch.orgizm.net/$arch<br />
</nowiki>}}<br />
<br />
==== aurbin ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Automated build of AUR packages<br />
<br />
{{bc|<nowiki><br />
[aurbin]<br />
Server = http://aurbin.net/$arch<br />
</nowiki>}}<br />
<br />
==== cinnamon ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Stable and actively developed Cinnamon packages (Applets, Themes, Extensions), plus others (Hotot, qBitTorrent, GTK themes, Perl modules, and more).<br />
<br />
{{bc|<nowiki><br />
[cinnamon]<br />
Server = http://archlinux.zoelife4u.org/cinnamon/$arch<br />
</nowiki>}}<br />
<br />
==== ede ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Equinox Desktop Environment repository<br />
<br />
{{bc|<nowiki><br />
[ede]<br />
Server = http://www.equinox-project.org/repos/arch/$arch<br />
</nowiki>}}<br />
<br />
==== haskell-core ====<br />
<br />
* '''Maintainer:''' Magnus Therning<br />
* '''Description:''' Arch-Haskell repository<br />
* '''Upstream page:''' https://github.com/archhaskell/habs<br />
<br />
{{bc|<nowiki><br />
[haskell-core]<br />
Server = http://xsounds.org/~haskell/core/$arch<br />
</nowiki>}}<br />
<br />
==== heftig ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#heftig Jan Steffens]<br />
* '''Description:''' Includes linux-zen and aurora (Firefox development build - works alongside {{Pkg|firefox}} in the ''extra'' repository).<br />
* '''Upstream page:''' https://bbs.archlinux.org/viewtopic.php?id=117157<br />
<br />
{{bc|<nowiki><br />
[heftig]<br />
Server = http://pkgbuild.com/~heftig/repo/$arch<br />
</nowiki>}}<br />
<br />
==== herecura-stable ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' additional packages not found in the ''community'' repository<br />
<br />
{{bc|<nowiki><br />
[herecura-stable]<br />
Server = http://repo.herecura.be/herecura-stable/$arch<br />
</nowiki>}}<br />
<br />
==== herecura-testing ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' additional packages for testing, built against stable Arch<br />
<br />
{{bc|<nowiki><br />
[herecura-testing]<br />
Server = http://repo.herecura.be/herecura-testing/$arch<br />
</nowiki>}}<br />
<br />
==== mesa-git ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Mesa git builds for the ''testing'' and ''multilib-testing'' repositories<br />
<br />
{{bc|<nowiki><br />
[mesa-git]<br />
Server = http://pkgbuild.com/~lcarlier/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== oracle ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Oracle database client<br />
<br />
{{Warning|By adding this you are agreeing to the Oracle license at http://www.oracle.com/technetwork/licenses/instant-client-lic-152016.html}}<br />
<br />
{{bc|<nowiki><br />
[oracle]<br />
Server = http://linux.shikadi.net/arch/$repo/$arch/<br />
</nowiki>}}<br />
<br />
==== pantheon ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Repository containing Pantheon-related packages<br />
<br />
{{bc|<nowiki><br />
[pantheon]<br />
Server = http://pkgbuild.com/~alucryd/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== paulburton-fitbitd ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Contains fitbitd for synchronizing FitBit trackers<br />
<br />
{{bc|<nowiki><br />
[paulburton-fitbitd]<br />
Server = http://www.paulburton.eu/arch/fitbitd/$arch<br />
</nowiki>}}<br />
<br />
==== pfkernel ====<br />
<br />
* '''Maintainer:''' [[User:Nous|nous]]<br />
* '''Description:''' Generic and optimized binaries of the ARCH kernel patched with BFS, TuxOnIce, BFQ, Aufs3, linux-pf, kernel26-pf, gdm-old, nvidia-pf, nvidia-96xx, xchat-greek, arora-git<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/11734958/x86_64/index.html or start at http://tiny.cc/linux-pf<br />
<br />
{{bc|<nowiki><br />
[pfkernel]<br />
Server = http://dl.dropbox.com/u/11734958/$arch<br />
</nowiki>}}<br />
<br />
==== suckless ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' suckless.org packages<br />
<br />
{{bc|<nowiki><br />
[suckless]<br />
Server = http://dl.suckless.org/arch/$arch<br />
</nowiki>}}<br />
<br />
==== unity (xe-xe.org) ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' unity packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity]<br />
Server = http://unity.xe-xe.org/$arch<br />
</nowiki>}}<br />
<br />
==== unity-extra (xe-xe.org) ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' unity extra packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity-extra]<br />
Server = http://unity.xe-xe.org/extra/$arch<br />
</nowiki>}}<br />
<br />
==== unity (humbug.in) ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' unity packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity]<br />
Server = http://unity.humbug.in/$arch<br />
</nowiki>}}<br />
<br />
==== unity-extra (humbug.in) ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' unity extra packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity-extra]<br />
Server = http://unity.humbug.in/extra/$arch<br />
</nowiki>}}<br />
<br />
==== home_tarakbumba_archlinux_Arch_Extra_standard ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Contains a few pre-built AUR packages (zemberek, firefox-kde-opensuse, etc.)<br />
<br />
{{bc|<nowiki><br />
[home_tarakbumba_archlinux_Arch_Extra_standard]<br />
Server = http://download.opensuse.org/repositories/home:/tarakbumba:/archlinux/Arch_Extra_standard/$arch<br />
</nowiki>}}<br />
<br />
== i686 only ==<br />
<br />
=== Signed ===<br />
<br />
==== eee-ck ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Kernel and modules optimized for the Asus Eee PC 701, with the -ck patchset.<br />
<br />
{{bc|<nowiki><br />
[eee-ck]<br />
Server = http://zembla.shatteredsymmetry.com/repo<br />
</nowiki>}}<br />
<br />
==== xyne-i686 ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for the "i686" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{Note|This includes all packages in [[#xyne-any|<nowiki>[xyne-any]</nowiki>]].}}<br />
<br />
{{bc|<nowiki><br />
[xyne-i686]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
==== andrwe ====<br />
<br />
* '''Maintainer:''' Andrwe Lord Weber<br />
* '''Description:''' every program I use on x86_64, compiled for i686 as well<br />
* '''Upstream page:''' http://andrwe.org/linux/repository<br />
<br />
{{bc|<nowiki><br />
[andrwe]<br />
Server = http://repo.andrwe.org/i686<br />
</nowiki>}}<br />
<br />
==== batchbin ====<br />
{{Expansion|Who is the maintainer?}}<br />
{{Note|Offline since 2014-02-15.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' My personal projects and utilities which I feel can benefit others.<br />
<br />
{{bc|<nowiki><br />
[batchbin]<br />
Server = http://batchbin.ueuo.com/archlinux<br />
</nowiki>}}<br />
<br />
==== esclinux ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Mostly games, interactive fiction, and abc notation stuff already on the AUR.<br />
<br />
{{bc|<nowiki><br />
[esclinux]<br />
Server = http://download.tuxfamily.org/esclinuxcd/ressources/repo/i686/<br />
</nowiki>}}<br />
<br />
==== kpiche ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Stable OpenSync packages.<br />
<br />
{{bc|<nowiki><br />
[kpiche]<br />
Server = http://kpiche.archlinux.ca/repo<br />
</nowiki>}}<br />
<br />
==== kernel26-pae ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' PAE-enabled 32-bit kernel 2.6.39<br />
<br />
{{bc|<nowiki><br />
[kernel26-pae]<br />
Server = http://kernel26-pae.archlinux.ca/<br />
</nowiki>}}<br />
<br />
==== linux-pae ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' PAE-enabled 32-bit kernel 3.0<br />
<br />
{{bc|<nowiki><br />
[linux-pae]<br />
Server = http://pae.archlinux.ca/<br />
</nowiki>}}<br />
<br />
==== rfad ====<br />
<br />
* '''Maintainer:''' requiem [at] archlinux.us <br />
* '''Description:''' Repository made by haxit<br />
<br />
{{bc|<nowiki><br />
[rfad]<br />
Server = http://web.ncf.ca/ey723/archlinux/repo/<br />
</nowiki>}}<br />
<br />
==== studioidefix ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Precompiled boxee packages.<br />
<br />
{{bc|<nowiki><br />
[studioidefix]<br />
Server = http://studioidefix.googlecode.com/hg/repo/i686<br />
</nowiki>}}<br />
<br />
== x86_64 only ==<br />
<br />
=== Signed ===<br />
<br />
==== apathism ====<br />
<br />
* '''Maintainer:''' Koryabkin Ivan ([https://aur.archlinux.org/account/apathism/ apathism])<br />
* '''Upstream page:''' https://apathism.net/<br />
* '''Description:''' AUR packages that would take long to build, such as {{AUR|firefox-kde-opensuse}}.<br />
* '''Key-ID:''' 3E37398D<br />
* '''Keyfile:''' http://apathism.net/archlinux/apathism.key<br />
<br />
{{bc|<nowiki><br />
[apathism]<br />
Server = http://apathism.net/archlinux/<br />
</nowiki>}}<br />
<br />
==== bioinformatics ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/decryptedepsilon/ decryptedepsilon]<br />
* '''Description:''' A repository containing some software tools for Bioinformatics<br />
* '''Key-ID:''' 60442BA4<br />
<br />
{{bc|<nowiki><br />
[bioinformatics]<br />
Server = http://decryptedepsilon.bl.ee/repo/x86_64<br />
</nowiki>}}<br />
<br />
==== freifunk-rheinland ====<br />
<br />
* '''Maintainer:''' nomaster<br />
* '''Description:''' Packages for the Freifunk project: batman-adv, batctl, fastd and dependencies.<br />
<br />
{{bc|<nowiki><br />
[freifunk-rheinland]<br />
Server = http://mirror.fluxent.de/archlinux-custom/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== heimdal ====<br />
{{Note|Offline since 2014-03-06.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' Packages are compiled against Heimdal instead of MIT KRB5. Meant to be dropped before {{ic|[core]}} in {{ic|pacman.conf}}. All packages are signed.<br />
* '''Upstream page:''' https://github.com/Kiwilight/Heimdal-Pkgbuilds<br />
{{Warning|Be careful. Do not use this unless you know what you are doing because many of these packages override packages from the ''core'' and ''extra'' repositories}}<br />
<br />
{{bc|<nowiki><br />
[heimdal]<br />
Server = http://www.kiwilight.com/heimdal/$arch/<br />
</nowiki>}}<br />
<br />
==== infinality-bundle-multilib ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle multilib repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle-multilib]<br />
Server = http://bohoomil.com/repo/multilib/$arch<br />
</nowiki>}}<br />
<br />
==== siosm-aur ====<br />
<br />
* '''Maintainer:''' [https://tim.siosm.fr/about/ Timothee Ravier]<br />
* '''Description:''' packages also available in the Arch User Repository, sometimes with minor fixes<br />
* '''Upstream page:''' https://tim.siosm.fr/repositories/<br />
* '''Key-ID:''' 78688F83<br />
<br />
{{bc|<nowiki><br />
[siosm-aur]<br />
Server = http://repo.siosm.fr/$repo/<br />
</nowiki>}}<br />
<br />
==== siosm-selinux ====<br />
<br />
* '''Maintainer:''' [https://tim.siosm.fr/about/ Timothee Ravier]<br />
* '''Description:''' packages required for SELinux support – work in progress (notably, missing an Arch Linux-compatible SELinux policy). See the [[SELinux]] page for details.<br />
* '''Upstream page:''' https://tim.siosm.fr/repositories/<br />
* '''Key-ID:''' 78688F83<br />
<br />
{{bc|<nowiki><br />
[siosm-selinux]<br />
Server = http://repo.siosm.fr/$repo/<br />
</nowiki>}}<br />
<br />
==== subtitlecomposer ====<br />
<br />
* '''Maintainer:''' Mladen Milinkovic (maxrd2)<br />
* '''Description:''' Subtitle Composer stable and nightly builds<br />
* '''Upstream page:''' https://github.com/maxrd2/subtitlecomposer<br />
* '''Key-ID:''' EA8CEBEE<br />
<br />
{{bc|<nowiki><br />
[subtitlecomposer]<br />
Server = http://smoothware.net/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== xyne-x86_64 ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for the "x86_64" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not required as maintainer is a TU<br />
<br />
{{Note|This includes all packages in [[#xyne-any|<nowiki>[xyne-any]</nowiki>]].}}<br />
<br />
{{bc|<nowiki><br />
[xyne-x86_64]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
<br />
==== andrwe ====<br />
<br />
* '''Maintainer:''' Andrwe Lord Weber<br />
* '''Description:''' contains programs I'm using on many systems<br />
* '''Upstream page:''' http://andrwe.dyndns.org/doku.php/blog/repository {{Dead link|2013|11|30}}<br />
<br />
{{bc|<nowiki><br />
[andrwe]<br />
Server = http://repo.andrwe.org/x86_64<br />
</nowiki>}}<br />
<br />
==== archstudio ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Audio and Music Packages optimized for Intel Core i3, i5, and i7.<br />
* '''Upstream page:''' http://www.xsounds.org/~archstudio<br />
<br />
{{bc|<nowiki><br />
[archstudio]<br />
Server = http://www.xsounds.org/~archstudio/x86_64<br />
</nowiki>}}<br />
<br />
==== brtln ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#bpiotrowski Bartłomiej Piotrowski]<br />
* '''Description:''' Alpha releases of MariaDB, Wine with win32 support only, and some VCS packages.<br />
<br />
{{bc|<nowiki><br />
[brtln]<br />
Server = http://pkgbuild.com/~barthalion/brtln/$arch/<br />
</nowiki>}}<br />
<br />
==== hawaii ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' hawaii Qt5/Wayland-based desktop environment<br />
* '''Upstream page:''' http://www.maui-project.org/<br />
<br />
{{bc|<nowiki><br />
[hawaii]<br />
Server = http://archive.maui-project.org/archlinux/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== pnsft-pur ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Japanese input method packages Mozc (vanilla) and libkkc<br />
<br />
{{bc|<nowiki><br />
[pnsft-pur]<br />
Server = http://downloads.sourceforge.net/project/pnsft-aur/pur/x86_64<br />
</nowiki>}}<br />
<br />
==== mingw-w64 ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Almost all mingw-w64 packages in the AUR updated every 8 hours.<br />
* '''Upstream page:''' http://arch.linuxx.org<br />
<br />
{{bc|<nowiki><br />
[mingw-w64]<br />
Server = http://downloads.sourceforge.net/project/mingw-w64-archlinux/$arch<br />
Server = http://arch.linuxx.org/archlinux/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== rightscale ====<br />
<br />
* '''Maintainer:''' Chris Fordham <chris@fordham-nagy.id.au><br />
* '''Description:''' Packages for RightScale including the RightLink cloud instance agent. Install the package, rightscale-agent.<br />
<br />
{{bc|<nowiki><br />
[rightscale]<br />
Server = https://s3-us-west-1.amazonaws.com/archlinux-rightscale/$arch<br />
</nowiki>}}<br />
<br />
==== seiichiro ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' VDR and some plugins, mms, foo2zjs-drivers<br />
<br />
{{bc|<nowiki><br />
[seiichiro]<br />
Server = http://repo.seiichiro0185.org/x86_64<br />
</nowiki>}}<br />
<br />
==== studioidefix ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Precompiled boxee packages.<br />
<br />
{{bc|<nowiki><br />
[studioidefix]<br />
Server = http://studioidefix.googlecode.com/hg/repo/x86_64<br />
</nowiki>}}<br />
<br />
==== zen ====<br />
{{Note|Offline since 2014-03-06.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' Various and zengeist AUR packages.<br />
<br />
{{bc|<nowiki><br />
[zen]<br />
Server = http://zloduch.cz/archlinux/x86_64<br />
</nowiki>}}<br />
<br />
==== miusystem ====<br />
<br />
* '''Maintainer:''' Theodore Keloglou <thodoris-12@hotmail.com><br />
* '''Description:''' Packages that I use and might interest others<br />
<br />
{{bc|<nowiki><br />
[miusystem]<br />
Server = https://miusystem.com/archlinux-repo<br />
</nowiki>}}<br />
<br />
== armv6h only ==<br />
<br />
=== Unsigned ===<br />
<br />
==== arch-fook-armv6h ====<br />
<br />
* '''Maintainer:''' Jaska Kivelä <jaska@kivela.net><br />
* '''Description:''' Stuff that I have compiled for my Raspberry PI. Including Enlightenment and home automation stuff.<br />
<br />
{{bc|<nowiki><br />
[arch-fook-armv6h]<br />
Server = http://kivela.net/jaska/arch-fook-armv6h<br />
</nowiki>}}</div>Demizerhttps://wiki.archlinux.org/index.php?title=Unofficial_user_repositories&diff=311514Unofficial user repositories2014-04-22T23:42:02Z<p>Demizer: /* demz-repo-archiso */</p>
<hr />
<div>[[Category:Package management]]<br />
{{Related articles start}}<br />
{{Related|pacman-key}}<br />
{{Related|Official repositories}}<br />
{{Related articles end}} <br />
The AUR only allows users to upload PKGBUILDs and other package build related files; it does not provide a means of distributing binary packages. A user may therefore want to create a binary repository of their packages elsewhere. See [[Pacman Tips#Custom local repository]] for more information.<br />
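<br />
As a minimal sketch of what such a repository involves (the database and package file names below are hypothetical), built packages are collected in a directory and a database is generated for them with {{ic|repo-add}}:<br />
<br />
 $ repo-add myrepo.db.tar.gz mypackage-1.0-1-x86_64.pkg.tar.xz<br />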
<br />
If you have your own repository, please add it to this page so that other users will know where to find your packages. Please follow these rules when adding new repositories:<br />
<br />
* Keep the lists in alphabetical order.<br />
* Include some information about the maintainer: include at least a (nick)name and some form of contact information (web site, email address, user page on ArchWiki or the forums, etc.).<br />
* If the repository is of the ''signed'' variety, please include a key-id, possibly using it as the anchor for a link to its keyserver; if the key is not on a keyserver, include a link to the key file.<br />
* Include some short description (e.g. the category of packages provided in the repository).<br />
* If there is a page (either on ArchWiki or external) containing more information about the repository, include a link to it.<br />
* If possible, avoid using comments in code blocks; the formatted description is much more readable. Users who want comments in their {{ic|pacman.conf}} can easily add them on their own.<br />
<br />
{{Note|If you are looking to add a signed repository to your {{ic|pacman.conf}}, you must be familiar with [[Pacman-key#Adding unofficial keys]].}}<br />
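<br />
As an illustration (the key ID below is a placeholder; use the one listed for the repository in question), the maintainer's key is typically received and locally signed before the repository is added:<br />
<br />
 # pacman-key --recv-keys KEY-ID<br />
 # pacman-key --lsign-key KEY-ID<br />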
<br />
{{Expansion|Please fill in the missing information about maintainers.}}<br />
<br />
== Any ==<br />
<br />
"Any" repositories are architecture-independent. In other words, they can be used on both i686 and x86_64 systems.<br />
<br />
=== Signed ===<br />
<br />
==== bioinformatics-any ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/decryptedepsilon/ decryptedepsilon]<br />
* '''Description:''' A repository containing some Python packages and a genome browser for bioinformatics<br />
* '''Key-ID:''' 60442BA4<br />
<br />
{{bc|<nowiki><br />
[bioinformatics-any]<br />
Server = http://decryptedepsilon.bl.ee/repo/any<br />
</nowiki>}}<br />
<br />
==== infinality-bundle-fonts ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle-fonts repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle-fonts]<br />
Server = http://bohoomil.com/repo/fonts<br />
</nowiki>}}<br />
<br />
==== xyne-any ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for "any" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{Note|Use this repository only if there is no matching {{ic|[xyne-*]}} repository for your architecture.}}<br />
<br />
{{bc|<nowiki><br />
[xyne-any]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
==== archlinuxgr-any ====<br />
* '''Maintainer:'''<br />
* '''Description:''' The Hellenic (Greek) unofficial Arch Linux repository with many interesting packages.<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr-any]<br />
Server = http://archlinuxgr.tiven.org/archlinux/any<br />
</nowiki>}}<br />
<br />
== Both i686 and x86_64 ==<br />
<br />
Repositories with both i686 and x86_64 versions. The {{ic|$arch}} variable will be set automatically by pacman.<br />
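<br />
The value substituted for {{ic|$arch}} is taken from the {{ic|Architecture}} option in {{ic|pacman.conf}}; as a sketch, the common setting below makes pacman detect the architecture of the running system automatically:<br />
<br />
 [options]<br />
 Architecture = auto<br />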
<br />
=== Signed ===<br />
<br />
==== arcanisrepo ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#arcanis arcanis]<br />
* '''Description:''' A repository with some AUR packages including packages from VCS<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[arcanisrepo]<br />
Server = ftp://repo.arcanis.name/repo/$arch<br />
</nowiki>}}<br />
<br />
==== bbqlinux ====<br />
<br />
* '''Maintainer:''' [https://plus.google.com/u/0/+DanielHillenbrand/about Daniel Hillenbrand]<br />
* '''Description:''' Packages for Android Development<br />
* '''Upstream Page:''' http://bbqlinux.org/<br />
* '''Key-ID:''' Get the bbqlinux-keyring package, as it contains the needed keys.<br />
<br />
{{bc|<nowiki><br />
[bbqlinux]<br />
Server = http://packages.bbqlinux.org/$arch<br />
</nowiki>}}<br />
<br />
==== carstene1ns ====<br />
<br />
* '''Maintainer:''' [[User:Carstene1ns|Carsten Teibes]]<br />
* '''Description:''' AUR packages maintained and/or used by Carsten Teibes (games/Wii/lib32/Python)<br />
* '''Upstream page:''' http://arch.carsten-teibes.de (still under construction)<br />
* '''Key-ID:''' 2476B20B<br />
<br />
{{bc|<nowiki><br />
[carstene1ns]<br />
Server = http://repo.carsten-teibes.de/$arch<br />
</nowiki>}}<br />
<br />
==== catalyst ====<br />
<br />
* '''Maintainer:''' [[User:Vi0L0 | Vi0l0]]<br />
* '''Description:''' ATI Catalyst proprietary drivers.<br />
* '''Upstream Page:''' http://catalyst.wirephire.com<br />
* '''Key-ID:''' 653C3094<br />
<br />
{{bc|<nowiki><br />
[catalyst]<br />
Server = http://catalyst.wirephire.com/repo/catalyst/$arch<br />
## Mirrors, if the primary server does not work or is too slow:<br />
#Server = http://70.239.162.206/catalyst-mirror/repo/catalyst/$arch<br />
#Server = http://mirror.rts-informatique.fr/archlinux-catalyst/repo/catalyst/$arch<br />
#Server = http://mirror.hactar.bz/Vi0L0/catalyst/$arch<br />
</nowiki>}}<br />
<br />
==== catalyst-hd234k ====<br />
<br />
* '''Maintainer:''' [[User:Vi0L0 | Vi0l0]]<br />
* '''Description:''' ATI Catalyst proprietary drivers.<br />
* '''Upstream Page:''' http://catalyst.wirephire.com<br />
* '''Key-ID:''' 653C3094<br />
<br />
{{bc|<nowiki><br />
[catalyst-hd234k]<br />
Server = http://catalyst.wirephire.com/repo/catalyst-hd234k/$arch<br />
## Mirrors, if the primary server does not work or is too slow:<br />
#Server = http://70.239.162.206/catalyst-mirror/repo/catalyst-hd234k/$arch<br />
#Server = http://mirror.rts-informatique.fr/archlinux-catalyst/repo/catalyst-hd234k/$arch<br />
#Server = http://mirror.hactar.bz/Vi0L0/catalyst-hd234k/$arch<br />
</nowiki>}}<br />
<br />
==== city ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#bgyorgy Balló György]<br />
* '''Description:''' Experimental/unpopular packages.<br />
* '''Upstream page:''' http://pkgbuild.com/~bgyorgy/city.html<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[city]<br />
Server = http://pkgbuild.com/~bgyorgy/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== crypto ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Includes tomb, tomb-git, and other related software.<br />
<br />
{{bc|<nowiki><br />
[crypto]<br />
Server = http://tomb.dyne.org/arch_repo/$arch<br />
</nowiki>}}<br />
<br />
==== demz-repo-core ====<br />
<br />
* '''Maintainer:''' [http://demizerone.com Jesus Alvarez (demizer)]<br />
* '''Description:''' Packages for ZFS on Arch Linux.<br />
* '''Upstream page:''' http://demizerone.com/archzfs<br />
* '''Key-ID:''' 0EE7A126<br />
<br />
{{bc|<nowiki><br />
[demz-repo-core]<br />
Server = http://demizerone.com/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== demz-repo-archiso ====<br />
<br />
* '''Maintainer:''' Jesus Alvarez (demizer)<br />
* '''Description:''' Packages for installing ZFS from an Arch ISO live disk<br />
* '''Upstream page:''' http://demizerone.com/archzfs<br />
* '''Key-ID:''' 0EE7A126<br />
<br />
{{bc|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== infinality-bundle ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle main repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle]<br />
Server = http://bohoomil.com/repo/$arch<br />
</nowiki>}}<br />
<br />
==== metalgamer ====<br />
<br />
* '''Maintainer:''' [http://metalgamer.eu/ metalgamer]<br />
* '''Description:''' Packages I use and/or maintain on the AUR.<br />
* '''Key ID:''' F55313FB<br />
<br />
{{bc|<nowiki><br />
[metalgamer]<br />
Server = http://repo.metalgamer.eu/$arch<br />
</nowiki>}}<br />
<br />
==== pipelight ====<br />
<br />
* '''Maintainer:''' <br />
* '''Description:''' Pipelight and wine-compholio<br />
* '''Upstream page:''' [http://fds-team.de/ fds-team.de]<br />
* '''Key-ID:''' E49CC0415DC2D5CA<br />
* '''Keyfile:''' http://repos.fds-team.de/Release.key<br />
<br />
{{bc|<nowiki><br />
[pipelight]<br />
Server = http://repos.fds-team.de/stable/arch/$arch<br />
</nowiki>}}<br />
<br />
==== repo-ck ====<br />
<br />
* '''Maintainer:''' [[User:Graysky|graysky]]<br />
* '''Description:''' Kernel and modules with Brain Fuck Scheduler and all the goodies in the ck1 patch set.<br />
* '''Upstream page:''' [http://repo-ck.com repo-ck.com]<br />
* '''Wiki:''' [[repo-ck]]<br />
* '''Key-ID:''' 5EE46C4C<br />
<br />
{{bc|<nowiki><br />
[repo-ck]<br />
Server = http://repo-ck.com/$arch<br />
</nowiki>}}<br />
<br />
==== sergej-repo ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#spupykin Sergej Pupykin]<br />
* '''Description:''' psi-plus, owncloud-git, ziproxy, android, MySQL, and other stuff. Some packages also available for armv7h.<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[sergej-repo]<br />
Server = http://repo.p5n.pp.ru/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
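<br />
For example, the first entry below would then read (repository name and URL are taken verbatim from that entry):<br />
<br />
{{bc|<nowiki><br />
[alucryd]<br />
SigLevel = PackageOptional<br />
Server = http://pkgbuild.com/~alucryd/$repo/$arch<br />
</nowiki>}}<br />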
<br />
==== alucryd ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Repository containing various packages Maxime Gauduin maintains (or not) in the AUR.<br />
<br />
{{bc|<nowiki><br />
[alucryd]<br />
Server = http://pkgbuild.com/~alucryd/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== archaudio ====<br />
<br />
* '''Maintainer:''' [[User:Schivmeister|Ray Rashif]], [https://aur.archlinux.org/account/jhernberg Joakim Hernberg]<br />
* '''Description:''' Pro-audio packages<br />
<br />
{{bc|<nowiki><br />
[archaudio-production]<br />
Server = http://repos.archaudio.org/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== archie-repo ====<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/Kalinda/ Kalinda]<br />
* '''Description:''' Repo for wine-silverlight, pipelight, and some misc packages.<br />
<br />
{{bc|<nowiki><br />
[archie-repo]<br />
Server = http://andontie.net/archie-repo/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxcn ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Packages from the Chinese Arch Linux community.<br />
<br />
{{bc|<nowiki><br />
[archlinuxcn]<br />
Server = http://repo.archlinuxcn.org/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxfr ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:'''<br />
* '''Upstream page:''' http://afur.archlinux.fr<br />
<br />
{{bc|<nowiki><br />
[archlinuxfr]<br />
Server = http://repo.archlinux.fr/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgis ====<br />
{{Note|Offline since 2014-03-29.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' Maintainers needed - low bandwidth<br />
<br />
{{bc|<nowiki><br />
[archlinuxgis]<br />
Server = http://archlinuxgis.no-ip.org/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgr ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:'''<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr]<br />
Server = http://archlinuxgr.tiven.org/archlinux/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgr-kde4 ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' KDE4 packages (plasmoids, themes, etc.) provided by the Hellenic (Greek) Arch Linux community<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr-kde4]<br />
Server = http://archlinuxgr.tiven.org/archlinux-kde4/$arch<br />
</nowiki>}}<br />
<br />
==== archstuff ====<br />
{{Note|Offline since 2014-01-06.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' AUR's most-voted packages, plus many bin32-* and lib32-* packages.<br />
<br />
{{bc|<nowiki><br />
[archstuff]<br />
Server = http://archstuff.vs169092.vserver.de/$arch<br />
</nowiki>}}<br />
<br />
==== arsch ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' From users of orgizm.net<br />
<br />
{{bc|<nowiki><br />
[arsch]<br />
Server = http://arsch.orgizm.net/$arch<br />
</nowiki>}}<br />
<br />
==== aurbin ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Automated build of AUR packages<br />
<br />
{{bc|<nowiki><br />
[aurbin]<br />
Server = http://aurbin.net/$arch<br />
</nowiki>}}<br />
<br />
==== cinnamon ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Stable and actively developed Cinnamon packages (Applets, Themes, Extensions), plus others (Hotot, qBitTorrent, GTK themes, Perl modules, and more).<br />
<br />
{{bc|<nowiki><br />
[cinnamon]<br />
Server = http://archlinux.zoelife4u.org/cinnamon/$arch<br />
</nowiki>}}<br />
<br />
==== ede ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Equinox Desktop Environment repository<br />
<br />
{{bc|<nowiki><br />
[ede]<br />
Server = http://www.equinox-project.org/repos/arch/$arch<br />
</nowiki>}}<br />
<br />
==== haskell-core ====<br />
<br />
* '''Maintainer:''' Magnus Therning<br />
* '''Description:''' Arch-Haskell repository<br />
* '''Upstream page:''' https://github.com/archhaskell/habs<br />
<br />
{{bc|<nowiki><br />
[haskell-core]<br />
Server = http://xsounds.org/~haskell/core/$arch<br />
</nowiki>}}<br />
<br />
==== heftig ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#heftig Jan Steffens]<br />
* '''Description:''' Includes linux-zen and aurora (Firefox development build - works alongside {{Pkg|firefox}} in the ''extra'' repository).<br />
* '''Upstream page:''' https://bbs.archlinux.org/viewtopic.php?id=117157<br />
<br />
{{bc|<nowiki><br />
[heftig]<br />
Server = http://pkgbuild.com/~heftig/repo/$arch<br />
</nowiki>}}<br />
<br />
==== herecura-stable ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' additional packages not found in the ''community'' repository<br />
<br />
{{bc|<nowiki><br />
[herecura-stable]<br />
Server = http://repo.herecura.be/herecura-stable/$arch<br />
</nowiki>}}<br />
<br />
==== herecura-testing ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' additional testing packages built against stable Arch<br />
<br />
{{bc|<nowiki><br />
[herecura-testing]<br />
Server = http://repo.herecura.be/herecura-testing/$arch<br />
</nowiki>}}<br />
<br />
==== mesa-git ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Mesa git builds for the ''testing'' and ''multilib-testing'' repositories<br />
<br />
{{bc|<nowiki><br />
[mesa-git]<br />
Server = http://pkgbuild.com/~lcarlier/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== oracle ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Oracle database client<br />
<br />
{{Warning|By adding this you are agreeing to the Oracle license at http://www.oracle.com/technetwork/licenses/instant-client-lic-152016.html}}<br />
<br />
{{bc|<nowiki><br />
[oracle]<br />
Server = http://linux.shikadi.net/arch/$repo/$arch/<br />
</nowiki>}}<br />
<br />
==== pantheon ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Repository containing Pantheon-related packages<br />
<br />
{{bc|<nowiki><br />
[pantheon]<br />
Server = http://pkgbuild.com/~alucryd/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== paulburton-fitbitd ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Contains fitbitd for synchronizing FitBit trackers<br />
<br />
{{bc|<nowiki><br />
[paulburton-fitbitd]<br />
Server = http://www.paulburton.eu/arch/fitbitd/$arch<br />
</nowiki>}}<br />
<br />
==== pfkernel ====<br />
<br />
* '''Maintainer:''' [[User:Nous|nous]]<br />
* '''Description:''' Generic and optimized binaries of the ARCH kernel patched with BFS, TuxOnIce, BFQ, Aufs3, linux-pf, kernel26-pf, gdm-old, nvidia-pf, nvidia-96xx, xchat-greek, arora-git<br />
* '''Note:''' To browse the repository, append {{ic|index.html}} to the server URL (an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/11734958/x86_64/index.html or start at http://tiny.cc/linux-pf<br />
<br />
{{bc|<nowiki><br />
[pfkernel]<br />
Server = http://dl.dropbox.com/u/11734958/$arch<br />
</nowiki>}}<br />
<br />
==== suckless ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' suckless.org packages<br />
<br />
{{bc|<nowiki><br />
[suckless]<br />
Server = http://dl.suckless.org/arch/$arch<br />
</nowiki>}}<br />
<br />
==== unity ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' unity packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity]<br />
Server = http://unity.xe-xe.org/$arch<br />
</nowiki>}}<br />
<br />
==== unity-extra ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' unity extra packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity-extra]<br />
Server = http://unity.xe-xe.org/extra/$arch<br />
</nowiki>}}<br />
<br />
==== unity ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' unity packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity]<br />
Server = http://unity.humbug.in/$arch<br />
</nowiki>}}<br />
<br />
==== unity-extra ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' unity extra packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity-extra]<br />
Server = http://unity.humbug.in/extra/$arch<br />
</nowiki>}}<br />
<br />
==== home_tarakbumba_archlinux_Arch_Extra_standard ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Contains a few pre-built AUR packages (zemberek, firefox-kde-opensuse, etc.)<br />
<br />
{{bc|<nowiki><br />
[home_tarakbumba_archlinux_Arch_Extra_standard]<br />
Server = http://download.opensuse.org/repositories/home:/tarakbumba:/archlinux/Arch_Extra_standard/$arch<br />
</nowiki>}}<br />
<br />
== i686 only ==<br />
<br />
=== Signed ===<br />
<br />
==== eee-ck ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Kernel and modules optimized for Asus Eee PC 701, with -ck patchset.<br />
<br />
{{bc|<nowiki><br />
[eee-ck]<br />
Server = http://zembla.shatteredsymmetry.com/repo<br />
</nowiki>}}<br />
<br />
==== xyne-i686 ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for the "i686" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{Note|This includes all packages in [[#xyne-any|<nowiki>[xyne-any]</nowiki>]].}}<br />
<br />
{{bc|<nowiki><br />
[xyne-i686]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
==== andrwe ====<br />
<br />
* '''Maintainer:''' Andrwe Lord Weber<br />
* '''Description:''' each program I'm using on x86_64 is compiled for i686 too<br />
* '''Upstream page:''' http://andrwe.org/linux/repository<br />
<br />
{{bc|<nowiki><br />
[andrwe]<br />
Server = http://repo.andrwe.org/i686<br />
</nowiki>}}<br />
<br />
==== batchbin ====<br />
{{Expansion|Who is the maintainer?}}<br />
{{Note|Offline since 2014-02-15.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' My personal projects and utilities which I feel can benefit others.<br />
<br />
{{bc|<nowiki><br />
[batchbin]<br />
Server = http://batchbin.ueuo.com/archlinux<br />
</nowiki>}}<br />
<br />
==== esclinux ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Mostly games, interactive fiction, and abc notation stuff already on the AUR.<br />
<br />
{{bc|<nowiki><br />
[esclinux]<br />
Server = http://download.tuxfamily.org/esclinuxcd/ressources/repo/i686/<br />
</nowiki>}}<br />
<br />
==== kpiche ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Stable OpenSync packages.<br />
<br />
{{bc|<nowiki><br />
[kpiche]<br />
Server = http://kpiche.archlinux.ca/repo<br />
</nowiki>}}<br />
<br />
==== kernel26-pae ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' PAE-enabled 32-bit kernel 2.6.39<br />
<br />
{{bc|<nowiki><br />
[kernel26-pae]<br />
Server = http://kernel26-pae.archlinux.ca/<br />
</nowiki>}}<br />
<br />
==== linux-pae ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' PAE-enabled 32-bit kernel 3.0<br />
<br />
{{bc|<nowiki><br />
[linux-pae]<br />
Server = http://pae.archlinux.ca/<br />
</nowiki>}}<br />
<br />
==== rfad ====<br />
<br />
* '''Maintainer:''' requiem [at] archlinux.us <br />
* '''Description:''' Repository made by haxit<br />
<br />
{{bc|<nowiki><br />
[rfad]<br />
Server = http://web.ncf.ca/ey723/archlinux/repo/<br />
</nowiki>}}<br />
<br />
==== studioidefix ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Precompiled boxee packages.<br />
<br />
{{bc|<nowiki><br />
[studioidefix]<br />
Server = http://studioidefix.googlecode.com/hg/repo/i686<br />
</nowiki>}}<br />
<br />
== x86_64 only ==<br />
<br />
=== Signed ===<br />
<br />
==== apathism ====<br />
<br />
* '''Maintainer:''' Koryabkin Ivan ([https://aur.archlinux.org/account/apathism/ apathism])<br />
* '''Upstream page:''' https://apathism.net/<br />
* '''Description:''' AUR packages that take a long time to build, such as {{AUR|firefox-kde-opensuse}}.<br />
* '''Key-ID:''' 3E37398D<br />
* '''Keyfile:''' http://apathism.net/archlinux/apathism.key<br />
<br />
{{bc|<nowiki><br />
[apathism]<br />
Server = http://apathism.net/archlinux/<br />
</nowiki>}}<br />
<br />
==== bioinformatics ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/decryptedepsilon/ decryptedepsilon]<br />
* '''Description:''' A repository containing some software tools for Bioinformatics<br />
* '''Key-ID:''' 60442BA4<br />
<br />
{{bc|<nowiki><br />
[bioinformatics]<br />
Server = http://decryptedepsilon.bl.ee/repo/x86_64<br />
</nowiki>}}<br />
<br />
==== freifunk-rheinland ====<br />
<br />
* '''Maintainer:''' nomaster<br />
* '''Description:''' Packages for the Freifunk project: batman-adv, batctl, fastd and dependencies.<br />
<br />
{{bc|<nowiki><br />
[freifunk-rheinland]<br />
Server = http://mirror.fluxent.de/archlinux-custom/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== heimdal ====<br />
{{Note|Offline since 2014-03-06.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' Packages are compiled against Heimdal instead of MIT KRB5. Meant to be dropped before {{ic|[core]}} in {{ic|pacman.conf}}. All packages are signed.<br />
* '''Upstream page:''' https://github.com/Kiwilight/Heimdal-Pkgbuilds<br />
{{Warning|Be careful. Do not use this unless you know what you are doing because many of these packages override packages from the ''core'' and ''extra'' repositories}}<br />
<br />
{{bc|<nowiki><br />
[heimdal]<br />
Server = http://www.kiwilight.com/heimdal/$arch/<br />
</nowiki>}}<br />
<br />
==== infinality-bundle-multilib ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle multilib repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle-multilib]<br />
Server = http://bohoomil.com/repo/multilib/$arch<br />
</nowiki>}}<br />
<br />
==== siosm-aur ====<br />
<br />
* '''Maintainer:''' [https://tim.siosm.fr/about/ Timothee Ravier]<br />
* '''Description:''' packages also available in the Arch User Repository, sometimes with minor fixes<br />
* '''Upstream page:''' https://tim.siosm.fr/repositories/<br />
* '''Key-ID:''' 78688F83<br />
<br />
{{bc|<nowiki><br />
[siosm-aur]<br />
Server = http://repo.siosm.fr/$repo/<br />
</nowiki>}}<br />
<br />
==== siosm-selinux ====<br />
<br />
* '''Maintainer:''' [https://tim.siosm.fr/about/ Timothee Ravier]<br />
* '''Description:''' packages required for SELinux support – work in progress (notably, missing an Arch Linux-compatible SELinux policy). See the [[SELinux]] page for details.<br />
* '''Upstream page:''' https://tim.siosm.fr/repositories/<br />
* '''Key-ID:''' 78688F83<br />
<br />
{{bc|<nowiki><br />
[siosm-selinux]<br />
Server = http://repo.siosm.fr/$repo/<br />
</nowiki>}}<br />
<br />
==== subtitlecomposer ====<br />
<br />
* '''Maintainer:''' Mladen Milinkovic (maxrd2)<br />
* '''Description:''' Subtitle Composer stable and nightly builds<br />
* '''Upstream page:''' https://github.com/maxrd2/subtitlecomposer<br />
* '''Key-ID:''' EA8CEBEE<br />
<br />
{{bc|<nowiki><br />
[subtitlecomposer]<br />
Server = http://smoothware.net/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== xyne-x86_64 ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for the "x86_64" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not required as maintainer is a TU<br />
<br />
{{Note|This includes all packages in [[#xyne-any|<nowiki>[xyne-any]</nowiki>]].}}<br />
<br />
{{bc|<nowiki><br />
[xyne-x86_64]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
<br />
==== andrwe ====<br />
<br />
* '''Maintainer:''' Andrwe Lord Weber<br />
* '''Description:''' contains programs I'm using on many systems<br />
* '''Upstream page:''' http://andrwe.dyndns.org/doku.php/blog/repository {{Dead link|2013|11|30}}<br />
<br />
{{bc|<nowiki><br />
[andrwe]<br />
Server = http://repo.andrwe.org/x86_64<br />
</nowiki>}}<br />
<br />
==== archstudio ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Audio and music packages optimized for Intel Core i3, i5, and i7.<br />
* '''Upstream page:''' http://www.xsounds.org/~archstudio<br />
<br />
{{bc|<nowiki><br />
[archstudio]<br />
Server = http://www.xsounds.org/~archstudio/x86_64<br />
</nowiki>}}<br />
<br />
==== brtln ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#bpiotrowski Bartłomiej Piotrowski]<br />
* '''Description:''' Alpha releases of MariaDB, Wine with win32 support only, and some VCS packages.<br />
<br />
{{bc|<nowiki><br />
[brtln]<br />
Server = http://pkgbuild.com/~barthalion/brtln/$arch/<br />
</nowiki>}}<br />
<br />
==== hawaii ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Hawaii, a Qt5/Wayland-based desktop environment<br />
* '''Upstream page:''' http://www.maui-project.org/<br />
<br />
{{bc|<nowiki><br />
[hawaii]<br />
Server = http://archive.maui-project.org/archlinux/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== pnsft-pur ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Japanese input method packages Mozc (vanilla) and libkkc<br />
<br />
{{bc|<nowiki><br />
[pnsft-pur]<br />
Server = http://downloads.sourceforge.net/project/pnsft-aur/pur/x86_64<br />
</nowiki>}}<br />
<br />
==== mingw-w64 ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Almost all mingw-w64 packages in the AUR, updated every 8 hours.<br />
* '''Upstream page:''' http://arch.linuxx.org<br />
<br />
{{bc|<nowiki><br />
[mingw-w64]<br />
Server = http://downloads.sourceforge.net/project/mingw-w64-archlinux/$arch<br />
Server = http://arch.linuxx.org/archlinux/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== rightscale ====<br />
<br />
* '''Maintainer:''' Chris Fordham <chris@fordham-nagy.id.au><br />
* '''Description:''' Packages for RightScale, including the RightLink cloud instance agent. Install the rightscale-agent package.<br />
<br />
{{bc|<nowiki><br />
[rightscale]<br />
Server = https://s3-us-west-1.amazonaws.com/archlinux-rightscale/$arch<br />
</nowiki>}}<br />
<br />
==== seiichiro ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' VDR and some plugins, mms, foo2zjs-drivers<br />
<br />
{{bc|<nowiki><br />
[seiichiro]<br />
Server = http://repo.seiichiro0185.org/x86_64<br />
</nowiki>}}<br />
<br />
==== studioidefix ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Precompiled boxee packages.<br />
<br />
{{bc|<nowiki><br />
[studioidefix]<br />
Server = http://studioidefix.googlecode.com/hg/repo/x86_64<br />
</nowiki>}}<br />
<br />
==== zen ====<br />
{{Note|Offline since 2014-03-06.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' Various AUR packages, including zengeist's.<br />
<br />
{{bc|<nowiki><br />
[zen]<br />
Server = http://zloduch.cz/archlinux/x86_64<br />
</nowiki>}}<br />
<br />
==== miusystem ====<br />
<br />
* '''Maintainer:''' Theodore Keloglou <thodoris-12@hotmail.com><br />
* '''Description:''' Packages that I use and might interest others<br />
<br />
{{bc|<nowiki><br />
[miusystem]<br />
Server = https://miusystem.com/archlinux-repo<br />
</nowiki>}}<br />
<br />
== armv6h only ==<br />
<br />
=== Unsigned ===<br />
<br />
==== arch-fook-armv6h ====<br />
<br />
* '''Maintainer:''' Jaska Kivelä <jaska@kivela.net><br />
* '''Description:''' Stuff that I have compiled for my Raspberry Pi, including Enlightenment and home automation packages.<br />
<br />
{{bc|<nowiki><br />
[arch-fook-armv6h]<br />
Server = http://kivela.net/jaska/arch-fook-armv6h<br />
</nowiki>}}</div>Demizerhttps://wiki.archlinux.org/index.php?title=Install_Arch_Linux_on_ZFS&diff=311512Install Arch Linux on ZFS2014-04-22T23:40:17Z<p>Demizer: /* Installing archzfs */</p>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
This article details the steps required to install Arch Linux onto a root ZFS filesystem. This article supplements the [[Beginners' guide]].<br />
<br />
== Installation ==<br />
<br />
See [[ZFS#Installation]] for installing the ZFS packages. If installing Arch Linux onto ZFS from the archiso, it is easier to use the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
=== Embedding archzfs into archiso ===<br />
<br />
See the [[ZFS#Embed_the_archzfs_packages_into_an_archiso|ZFS]] article.<br />
<br />
== Partition the destination drive ==<br />
<br />
Review [[Beginners' guide#Prepare_the_storage_drive]] for information on determining the partition table type to use for ZFS. ZFS supports GPT and MBR partition tables.<br />
<br />
ZFS manages its own partitions, so only a basic partition table scheme is required. The partition that will contain the ZFS filesystem should be of the type {{ic|bf00}}, or "Solaris Root".<br />
<br />
=== Partition scheme ===<br />
<br />
Here is an example, using MBR, of a basic partition scheme that could be employed for your ZFS root setup:<br />
<br />
{{bc|<nowiki><br />
Part Size Type<br />
---- ---- -------------------------<br />
1 512M Ext boot partition (8300)<br />
2 XXXG Solaris Root (bf00)</nowiki><br />
}}<br />
<br />
Here is an example using GPT. The BIOS boot partition contains the bootloader.<br />
<br />
{{bc|<nowiki><br />
Part Size Type<br />
---- ---- -------------------------<br />
   1    2M BIOS boot partition (ef02)<br />
   2  512M Ext boot partition (8300)<br />
   3  XXXG Solaris Root (bf00)</nowiki><br />
}}<br />
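<br />
As a sketch, a layout like the GPT example above could be created non-interactively with ''sgdisk'' ({{ic|/dev/sdx}} is a placeholder for the destination disk):<br />
<br />
 # sgdisk -n 1:0:+2M -t 1:ef02 /dev/sdx<br />
 # sgdisk -n 2:0:+512M -t 2:8300 /dev/sdx<br />
 # sgdisk -n 3:0:0 -t 3:bf00 /dev/sdx<br />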
<br />
An additional partition may be required depending on your hardware and chosen bootloader. Consult [[Beginners' guide#Install_and_configure_a_bootloader]] for more information.<br />
<br />
{{Tip|Bootloaders with support for ZFS are described in [[#Install and configure the bootloader]].}}<br />
<br />
== Format the destination disk ==<br />
<br />
Format the boot partition as well as any other system partitions. Do not do anything to the Solaris partition or to the BIOS boot partition; ZFS will manage the former, and your bootloader the latter.<br />
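<br />
For example, assuming the boot partition from the GPT scheme above is to be formatted as ext4 (the device name is a placeholder):<br />
<br />
 # mkfs.ext4 /dev/sdx2<br />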
<br />
== Setup the ZFS filesystem ==<br />
<br />
First, make sure the ZFS modules are loaded:<br />
<br />
# modprobe zfs<br />
<br />
=== Create the root zpool ===<br />
<br />
# zpool create zroot /dev/disk/by-id/''id-to-partition''<br />
<br />
{{Warning|Always use id names when working with ZFS, otherwise import errors will occur.}}<br />
<br />
=== Create necessary filesystems ===<br />
<br />
If so desired, sub-filesystem mount points such as {{ic|/home}} and {{ic|/root}} can be created with the following commands:<br />
<br />
# zfs create zroot/home -o mountpoint=/home<br />
# zfs create zroot/root -o mountpoint=/root<br />
<br />
Note that if you want to use other datasets for system directories ({{ic|/var}} or {{ic|/etc}} included), your system will not boot unless they are listed in {{ic|/etc/fstab}}! This is addressed at the appropriate point later in this tutorial.<br />
<br />
=== Swap partition ===<br />
<br />
ZFS does not allow the use of swapfiles, but it is possible to use a ZFS volume (ZVOL) as a swap partition. It is important to set the ZVOL block size to match the system page size; for x86_64 systems that is 4 KiB.<br />
<br />
Create an 8 GB (or whatever size is required) ZFS volume:<br />
<br />
# zfs create -V 8G -b 4K zroot/swap<br />
<br />
Initialize and enable the volume as a swap partition:<br />
<br />
# mkswap /dev/zvol/zroot/swap<br />
# swapon /dev/zvol/zroot/swap<br />
<br />
After using {{ic|pacstrap}} to install the base system, edit {{ic|/zroot/etc/fstab}} to ensure the swap partition is mounted at boot:<br />
<br />
/dev/zvol/zroot/swap none swap defaults 0 0<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
<br />
# zfs umount -a<br />
<br />
=== Configure the root filesystem ===<br />
<br />
First, set the mount point of the root filesystem:<br />
<br />
# zfs set mountpoint=/ zroot<br />
<br />
and optionally, any sub-filesystems:<br />
<br />
# zfs set mountpoint=/home zroot/home<br />
# zfs set mountpoint=/root zroot/root<br />
<br />
and, if you have separate datasets for system directories (e.g. {{ic|/var}} or {{ic|/usr}}), set them to legacy mountpoints:<br />
<br />
# zfs set mountpoint=legacy zroot/usr<br />
# zfs set mountpoint=legacy zroot/var<br />
<br />
and add them to {{ic|/etc/fstab}}:<br />
{{hc|/etc/fstab|<br />
# <file system> <dir> <type> <options> <dump> <pass><br />
zroot/usr /usr zfs defaults,noatime 0 0<br />
zroot/var /var zfs defaults,noatime 0 0}}<br />
<br />
Set the bootfs property on the descendant root filesystem so the boot loader knows where to find the operating system.<br />
<br />
# zpool set bootfs=zroot zroot<br />
<br />
Export the pool:<br />
<br />
# zpool export zroot<br />
<br />
{{Warning|Do not skip this, otherwise you will be required to use {{ic|-f}} when importing your pools. This unloads the imported pool.}}<br />
{{Note|This might fail if you enabled a swap volume above; turn it off first with the ''swapoff'' command.}}<br />
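<br />
For example, using the swap volume created earlier:<br />
<br />
 # swapoff /dev/zvol/zroot/swap<br />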
<br />
Finally, re-import the pool:<br />
<br />
# zpool import -d /dev/disk/by-id -R /mnt zroot<br />
<br />
{{Note|{{ic|-d}} is not the actual device id, but the {{ic|/dev/disk/by-id}} directory containing the symbolic links.}}<br />
<br />
If there is an error in this step, you can export the pool to redo the command. The ZFS filesystem is now ready to use.<br />
<br />
Be sure to bring the zpool.cache file into your new system. This is required later for the ZFS daemon to start.<br />
<br />
# cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache<br />
<br />
If you do not have {{ic|/etc/zfs/zpool.cache}}, create it:<br />
<br />
# zpool set cachefile=/etc/zfs/zpool.cache zroot<br />
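<br />
As an optional sanity check, the property can be read back afterwards:<br />
<br />
 # zpool get cachefile zroot<br />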
<br />
== Install and configure Arch Linux ==<br />
<br />
Follow the steps in the [[Beginners' guide]]; it will be noted where special consideration must be taken for ZFSonLinux.<br />
<br />
* First mount any boot or system partitions using the mount command.<br />
<br />
* Install the base system.<br />
<br />
* The procedure described in [[Beginners' guide#Generate an fstab]] is usually overkill for ZFS. ZFS usually auto-mounts its own datasets, so ZFS entries are not needed in the {{ic|fstab}} file unless the user made datasets of system directories. To generate {{ic|fstab}} entries for the other filesystems, use:<br />
# genfstab -U -p /mnt | grep boot >> /mnt/etc/fstab<br />
<br />
* Edit the {{ic|/etc/fstab}}:<br />
<br />
{{Note|<br />
* If you chose to create datasets for system directories, keep them in this {{ic|fstab}}! Comment out the lines for the {{ic|/}}, {{ic|/root}}, and {{ic|/home}} mountpoints, rather than deleting them. You may need those UUIDs later if something goes wrong.<br />
* Anyone who just stuck with the guide's directions can delete everything except for the swap and the boot/EFI partition entries. It is conventional to replace the swap's UUID with {{ic|/dev/zvol/zroot/swap}}.<br />
}}<br />
<br />
* When creating the initial ramdisk, first edit {{ic|/etc/mkinitcpio.conf}} and add {{ic|zfs}} before {{ic|filesystems}}. Also, move the {{ic|keyboard}} hook before {{ic|zfs}} so you can type in the console if something goes wrong. You may also remove {{ic|fsck}} (if you are not using Ext3 or Ext4). Your {{ic|HOOKS}} line should look something like this:<br />
HOOKS="base udev autodetect modconf block keyboard zfs filesystems"<br />
<br />
* Regenerate the initramfs with the command:<br />
# mkinitcpio -p linux<br />
<br />
== Install and configure the bootloader ==<br />
<br />
=== For BIOS motherboards ===<br />
<br />
Follow [[GRUB#BIOS_systems_2]] to install GRUB onto your disk. {{ic|grub-mkconfig}} does not properly detect the ZFS filesystem, so it is necessary to edit {{ic|grub.cfg}} manually:<br />
<br />
{{hc|/boot/grub/grub.cfg|<nowiki><br />
set timeout=2<br />
set default=0<br />
<br />
# (0) Arch Linux<br />
menuentry "Arch Linux" {<br />
set root=(hd0,msdos1)<br />
linux /vmlinuz-linux zfs=zroot rw<br />
initrd /initramfs-linux.img<br />
}<br />
</nowiki>}}<br />
<br />
If you did not create a separate {{ic|/boot}} partition, the kernel and initrd paths have to be in the following format:<br />
<br />
/dataset/@/actual/path <br />
<br />
Example:<br />
<br />
linux /@/boot/vmlinuz-linux zfs=zroot rw<br />
initrd /@/boot/initramfs-linux.img<br />
<br />
=== For UEFI motherboards ===<br />
<br />
Use {{ic|EFISTUB}} and {{ic|rEFInd}} for the UEFI boot loader. See [[Beginners' guide#For UEFI motherboards]]. The kernel parameters in {{ic|refind_linux.conf}} for ZFS should include {{ic|1=zfs=bootfs}} or {{ic|1=zfs=zroot}} so the system can boot from ZFS. The {{ic|root}} and {{ic|rootfstype}} parameters are not needed.<br />
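<br />
As a sketch, a minimal {{ic|refind_linux.conf}} placed next to the kernel might contain a single menu entry; the title string below is arbitrary:<br />
<br />
{{hc|refind_linux.conf|<nowiki><br />
"Boot Arch Linux on ZFS" "zfs=zroot rw"<br />
</nowiki>}}<br />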
<br />
== Unmount and restart ==<br />
<br />
We are almost done!<br />
# exit<br />
# umount /mnt/boot<br />
# zfs umount -a<br />
# zpool export zroot<br />
Now reboot.<br />
<br />
{{Warning|If you do not properly export the zpool, the pool will refuse to import in the ramdisk environment and you will be stuck at the busybox terminal.}}<br />
<br />
== After the first boot ==<br />
<br />
If everything went fine up to this point, your system will boot. Once.<br />
For your system to be able to reboot without issues, you need to enable the {{ic|zfs}} service and set the hostid.<br />
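<br />
Enabling the service is a one-liner, assuming a systemd unit named {{ic|zfs.service}} (the exact unit name depends on the ZFS package installed):<br />
<br />
 # systemctl enable zfs.service<br />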
<br />
When running ZFS on root, the machine's hostid will not be available at the time of mounting the root filesystem. There are two solutions to this. The first is to place your spl hostid in the [[kernel parameters]] in your boot loader, for example by adding {{ic|<nowiki>spl.spl_hostid=0x00bab10c</nowiki>}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}}, and then regenerate the initramfs image, which will copy the hostid into it.<br />
<br />
# hostid > /etc/hostid<br />
# mkinitcpio -p linux<br />
<br />
Your system should work and reboot properly now.<br />
<br />
== See also ==<br />
<br />
* [https://github.com/dajhorn/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem HOWTO install Ubuntu to a Native ZFS Root]<br />
* [http://lildude.co.uk/zfs-cheatsheet ZFS cheatsheet]<br />
* [http://www.funtoo.org/wiki/ZFS_Install_Guide Funtoo ZFS install guide]</div>Demizerhttps://wiki.archlinux.org/index.php?title=Install_Arch_Linux_on_ZFS&diff=311510Install Arch Linux on ZFS2014-04-22T23:36:49Z<p>Demizer: Remove section about using the archzfs repository.</p>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
This article details the steps required to install Arch Linux onto a root ZFS filesystem. This article supplements the [[Beginners' guide]].<br />
<br />
== Installing archzfs ==<br />
<br />
Using the archzfs repository is highly recommended for effortless updates.<br />
<br />
{{Warning|The ZFS packages are tied to the kernel version they were built against. This means it will not be possible to perform kernel updates until new packages (or package sources) are released by the ZFS package maintainer.}}<br />
<br />
{{Note|1=This guide uses the unofficial archzfs repository hosted at http://demizerone.com/demz-repo-core. This repository is maintained by Jesus Alvarez and is signed with his PGP key: [http://pgp.mit.edu:11371/pks/lookup?op=vindex&search=0x5E1ABF240EE7A126 0EE7A126].}}<br />
<br />
=== Embedding archzfs into archiso ===<br />
<br />
See [[ZFS#Embed_the_archzfs_packages_into_an_archiso|ZFS]] article.<br />
<br />
== Partition the destination drive ==<br />
<br />
Review [[Beginners' guide#Prepare_the_storage_drive]] for information on determining the partition table type to use for ZFS. ZFS supports GPT and MBR partition tables.<br />
<br />
ZFS manages its own partitions, so only a basic partition table scheme is required. The partition that will contain the ZFS filesystem should be of the type {{ic|bf00}}, or "Solaris Root".<br />
<br />
=== Partition scheme ===<br />
<br />
Here is an example, using MBR, of a basic partition scheme that could be employed for your ZFS root setup:<br />
<br />
{{bc|<nowiki><br />
Part Size Type<br />
---- ---- -------------------------<br />
1 512M Ext boot partition (8300)<br />
2 XXXG Solaris Root (bf00)</nowiki><br />
}}<br />
<br />
Here is an example using GPT. The BIOS boot partition contains the bootloader.<br />
<br />
{{bc|<nowiki><br />
Part Size Type<br />
---- ---- -------------------------<br />
1 2M BIOS boot partition (ef02)<br />
2 512M Ext boot partition (8300)<br />
3 XXXG Solaris Root (bf00)</nowiki><br />
}}<br />
<br />
An additional partition may be required depending on your hardware and chosen bootloader. Consult [[Beginners' guide#Install_and_configure_a_bootloader]] for more info.<br />
<br />
{{Tip|Bootloaders with support for ZFS are described in [[#Install and configure the bootloader]].}}<br />
<br />
== Format the destination disk ==<br />
<br />
Format the boot partition as well as any other system partitions. Do not do anything to the Solaris partition or to the BIOS boot partition: ZFS will manage the first, and your bootloader the second.<br />
<br />
== Setup the ZFS filesystem ==<br />
<br />
First, make sure the ZFS modules are loaded,<br />
<br />
# modprobe zfs<br />
<br />
=== Create the root zpool ===<br />
<br />
# zpool create zroot /dev/disk/by-id/''id-to-partition''<br />
<br />
{{Warning|Always use the persistent device names from {{ic|/dev/disk/by-id/}} when working with ZFS, otherwise import errors will occur.}}<br />
<br />
=== Create necessary filesystems ===<br />
<br />
If so desired, sub-filesystem mount points such as {{ic|/home}} and {{ic|/root}} can be created with the following commands:<br />
<br />
# zfs create zroot/home -o mountpoint=/home<br />
# zfs create zroot/root -o mountpoint=/root<br />
<br />
Note that if you want to use other datasets for system directories ({{ic|/var}} or {{ic|/etc}} included), your system will not boot unless they are listed in {{ic|/etc/fstab}}! We will address that at the appropriate time in this tutorial.<br />
<br />
=== Swap partition ===<br />
<br />
ZFS does not allow the use of swapfiles, but it is possible to use a ZFS volume (ZVOL) as a swap partition. It is important to set the ZVOL block size to match the system page size; for x86_64 systems that is 4K.<br />
<br />
Create an 8 GB (or whatever size is required) ZFS volume:<br />
<br />
# zfs create -V 8G -b 4K zroot/swap<br />
<br />
Initialize and enable the volume as a swap partition:<br />
<br />
# mkswap /dev/zvol/zroot/swap<br />
# swapon /dev/zvol/zroot/swap<br />
<br />
After using {{ic|pacstrap}} to install the base system, edit {{ic|/zroot/etc/fstab}} to ensure the swap partition is mounted at boot:<br />
<br />
/dev/zvol/zroot/swap none swap defaults 0 0<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
<br />
# zfs umount -a<br />
<br />
=== Configure the root filesystem ===<br />
<br />
First, set the mount point of the root filesystem:<br />
<br />
# zfs set mountpoint=/ zroot<br />
<br />
and optionally, any sub-filesystems:<br />
<br />
# zfs set mountpoint=/home zroot/home<br />
# zfs set mountpoint=/root zroot/root<br />
<br />
and if you have separate datasets for system directories (e.g. {{ic|/var}} or {{ic|/usr}}):<br />
<br />
# zfs set mountpoint=legacy zroot/usr<br />
# zfs set mountpoint=legacy zroot/var<br />
<br />
and put them in {{ic|/etc/fstab}}<br />
{{hc|/etc/fstab|<br />
# <file system> <dir> <type> <options> <dump> <pass><br />
zroot/usr /usr zfs defaults,noatime 0 0<br />
zroot/var /var zfs defaults,noatime 0 0}}<br />
<br />
Set the bootfs property on the descendant root filesystem so the boot loader knows where to find the operating system.<br />
<br />
# zpool set bootfs=zroot zroot<br />
<br />
Export the pool,<br />
<br />
# zpool export zroot<br />
<br />
{{Warning|Do not skip this, otherwise you will be required to use {{ic|-f}} when importing your pools. This unloads the imported pool.}}<br />
{{Note|This might fail if you added a swap partition above; turn it off first with the ''swapoff'' command.}}<br />
<br />
Finally, re-import the pool,<br />
<br />
# zpool import -d /dev/disk/by-id -R /mnt zroot<br />
<br />
{{Note|{{ic|-d}} is not the actual device id, but the {{ic|/dev/disk/by-id}} directory containing the symbolic links.}}<br />
<br />
If there is an error in this step, you can export the pool to redo the command. The ZFS filesystem is now ready to use.<br />
<br />
Be sure to bring the zpool.cache file into your new system. This is required later for the ZFS daemon to start.<br />
<br />
# cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache<br />
<br />
If you do not have {{ic|/etc/zfs/zpool.cache}}, create it:<br />
<br />
# zpool set cachefile=/etc/zfs/zpool.cache zroot<br />
<br />
== Install and configure Arch Linux ==<br />
<br />
Follow the steps in the [[Beginners' guide]]; it will be noted where special consideration must be taken for ZFSonLinux.<br />
<br />
* First mount any boot or system partitions using the mount command.<br />
<br />
* Install the base system.<br />
<br />
* The procedure described in [[Beginners' guide#Generate an fstab]] is usually overkill for ZFS. ZFS usually auto-mounts its own datasets, so ZFS entries are not needed in the {{ic|fstab}} file unless the user made datasets of system directories. To generate {{ic|fstab}} entries for the other filesystems, use:<br />
# genfstab -U -p /mnt | grep boot >> /mnt/etc/fstab<br />
<br />
* Edit the {{ic|/etc/fstab}}:<br />
<br />
{{Note|<br />
* If you chose to create datasets for system directories, keep them in this {{ic|fstab}}! Comment out the lines for the {{ic|/}}, {{ic|/root}}, and {{ic|/home}} mountpoints, rather than deleting them. You may need those UUIDs later if something goes wrong.<br />
* Anyone who just stuck with the guide's directions can delete everything except for the swap and the boot/EFI partition entries. It is conventional to replace the swap's UUID with {{ic|/dev/zvol/zroot/swap}}.<br />
}}<br />
<br />
* When creating the initial ramdisk, first edit {{ic|/etc/mkinitcpio.conf}} and add {{ic|zfs}} before {{ic|filesystems}}. Also, move the {{ic|keyboard}} hook before {{ic|zfs}} so you can type in the console if something goes wrong. You may also remove {{ic|fsck}} (if you are not using Ext3 or Ext4). Your {{ic|HOOKS}} line should look something like this:<br />
HOOKS="base udev autodetect modconf block keyboard zfs filesystems"<br />
<br />
* Regenerate the initramfs with the command:<br />
# mkinitcpio -p linux<br />
<br />
== Install and configure the bootloader ==<br />
<br />
=== For BIOS motherboards ===<br />
<br />
Follow [[GRUB#BIOS_systems_2]] to install GRUB onto your disk. {{ic|grub-mkconfig}} does not properly detect the ZFS filesystem, so it is necessary to edit {{ic|grub.cfg}} manually:<br />
<br />
{{hc|/boot/grub/grub.cfg|<nowiki><br />
set timeout=2<br />
set default=0<br />
<br />
# (0) Arch Linux<br />
menuentry "Arch Linux" {<br />
set root=(hd0,msdos1)<br />
linux /vmlinuz-linux zfs=zroot rw<br />
initrd /initramfs-linux.img<br />
}<br />
</nowiki>}}<br />
<br />
If you did not create a separate {{ic|/boot}} partition, the kernel and initrd paths have to be in the following format:<br />
<br />
/dataset/@/actual/path <br />
<br />
Example:<br />
<br />
linux /@/boot/vmlinuz-linux zfs=zroot rw<br />
initrd /@/boot/initramfs-linux.img<br />
<br />
=== For UEFI motherboards ===<br />
<br />
Use {{ic|EFISTUB}} and {{ic|rEFInd}} for the UEFI boot loader. See [[Beginners' guide#For UEFI motherboards]]. The kernel parameters in {{ic|refind_linux.conf}} for ZFS should include {{ic|1=zfs=bootfs}} or {{ic|1=zfs=zroot}} so the system can boot from ZFS. The {{ic|root}} and {{ic|rootfstype}} parameters are not needed.<br />
<br />
== Unmount and restart ==<br />
<br />
We are almost done!<br />
# exit<br />
# umount /mnt/boot<br />
# zfs umount -a<br />
# zpool export zroot<br />
Now reboot.<br />
<br />
{{Warning|If you do not properly export the zpool, the pool will refuse to import in the ramdisk environment and you will be stuck at the busybox terminal.}}<br />
<br />
== After the first boot ==<br />
<br />
If everything went fine up to this point, your system will boot. Once.<br />
For your system to be able to reboot without issues, you need to enable the {{ic|zfs}} service and set the hostid.<br />
<br />
When running ZFS on root, the machine's hostid will not be available at the time of mounting the root filesystem. There are two solutions to this. The first is to place your spl hostid in the [[kernel parameters]] in your boot loader, for example by adding {{ic|<nowiki>spl.spl_hostid=0x00bab10c</nowiki>}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}}, and then regenerate the initramfs image, which will copy the hostid into it.<br />
<br />
# hostid > /etc/hostid<br />
# mkinitcpio -p linux<br />
<br />
Your system should work and reboot properly now.<br />
<br />
== See also ==<br />
<br />
* [https://github.com/dajhorn/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem HOWTO install Ubuntu to a Native ZFS Root]<br />
* [http://lildude.co.uk/zfs-cheatsheet ZFS cheatsheet]<br />
* [http://www.funtoo.org/wiki/ZFS_Install_Guide Funtoo ZFS install guide]</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS/Virtual_disks&diff=311509ZFS/Virtual disks2014-04-22T23:33:44Z<p>Demizer: Change ZFS installation link.</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article covers some basic tasks and usage of ZFS. It differs from the main article [[ZFS]] somewhat in that the examples herein are demonstrated on a zpool built from virtual disks. So long as users do not place any critical data on the resulting zpool, they are free to experiment without fear of actual data loss.<br />
<br />
The examples in this article are shown with a set of virtual disks, known in ZFS terms as VDEVs. Users may create their VDEVs either on an existing physical disk or in tmpfs (RAMdisk), depending on the amount of free memory on the system.<br />
<br />
{{Note|Using a file as a VDEV is a great method to play with ZFS, but it is not a viable strategy for storing "real" data.}}<br />
<br />
== Install the ZFS Family of Packages ==<br />
Due to licensing differences, ZFS binaries and kernel modules are easily distributed from source, but not so easily packaged as pre-compiled sets. The requisite packages are available in the AUR and in an unofficial repository. Details are provided in the [[ZFS#Installation]] article.<br />
<br />
== Creating and Destroying Zpools ==<br />
Management of ZFS is quite simple, with only two utilities needed:<br />
* {{ic|/usr/bin/zpool}}<br />
* {{ic|/usr/bin/zfs}}<br />
<br />
=== Mirror ===<br />
For zpools with just two drives, it is recommended to use ZFS in ''mirror'' mode, which functions like RAID1 by mirroring the data. While this configuration is fine, the RAIDZ levels below are recommended when more disks are available.<br />
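<br />
For example, a two-disk mirror built from image-file VDEVs (a sketch using the same {{ic|/scratch}} working directory as the RAIDZ1 example below; the file names are arbitrary) could be assembled like this:<br />
<br />
 $ for i in {1..2}; do truncate -s 2G /scratch/mirror-$i.img; done<br />
 # zpool create zpool mirror /scratch/mirror-1.img /scratch/mirror-2.img<br />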
<br />
=== RAIDZ1 ===<br />
The minimum number of drives for a RAIDZ1 is three. It is best to follow the "power of two plus parity" recommendation. This is for storage space efficiency and hitting the "sweet spot" in performance. For RAIDZ1, use three (2+1), five (4+1), or nine (8+1) disks. This example will use the simplest set (2+1).<br />
<br />
Create three 2G files to serve as virtual hard drives:<br />
$ for i in {1..3}; do truncate -s 2G /scratch/$i.img; done<br />
<br />
Assemble the RAIDZ1:<br />
# zpool create zpool raidz1 /scratch/1.img /scratch/2.img /scratch/3.img<br />
<br />
Notice that a 3.91G zpool has been created and mounted for us:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 139K 3.91G 38.6K /zpool<br />
</nowiki>}}<br />
<br />
The status of the device can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
To destroy a zpool:<br />
# zpool destroy zpool<br />
<br />
=== RAIDZ2 and RAIDZ3 ===<br />
Higher-level RAIDZs can be assembled in a like fashion by adjusting the ''for'' statement to create the image files, by specifying "raidz2" or "raidz3" in the creation step, and by appending the additional image files to the creation step; a sketch follows the list below.<br />
<br />
Summarizing Toponce's guidance:<br />
* RAIDZ2 should use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks.<br />
* RAIDZ3 should use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks.<br />
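<br />
As a sketch, a minimal RAIDZ2 (2+2) using four image-file VDEVs follows the same pattern as the RAIDZ1 example above; the file names are arbitrary:<br />
<br />
 $ for i in {1..4}; do truncate -s 2G /scratch/raidz2-$i.img; done<br />
 # zpool create zpool raidz2 /scratch/raidz2-{1,2,3,4}.img<br />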
<br />
== Displaying and Setting Properties ==<br />
Without specifying them in the creation step, users can set properties of their zpools at any time after creation using {{ic|/usr/bin/zfs}}.<br />
<br />
=== Show Properties ===<br />
To see the current properties of a given zpool:<br />
{{hc|# zfs get all zpool|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool type filesystem -<br />
zpool creation Sun Oct 20 8:46 2013 -<br />
zpool used 139K -<br />
zpool available 3.91G -<br />
zpool referenced 38.6K -<br />
zpool compressratio 1.00x -<br />
zpool mounted yes -<br />
zpool quota none default<br />
zpool reservation none default<br />
zpool recordsize 128K default<br />
zpool mountpoint /zpool default<br />
zpool sharenfs off default<br />
zpool checksum on default<br />
zpool compression off default<br />
zpool atime on default<br />
zpool devices on default<br />
zpool exec on default<br />
zpool setuid on default<br />
zpool readonly off default<br />
zpool zoned off default<br />
zpool snapdir hidden default<br />
zpool aclinherit restricted default<br />
zpool canmount on default<br />
zpool xattr on default<br />
zpool copies 1 default<br />
zpool version 5 -<br />
zpool utf8only off -<br />
zpool normalization none -<br />
zpool casesensitivity sensitive -<br />
zpool vscan off default<br />
zpool nbmand off default<br />
zpool sharesmb off default<br />
zpool refquota none default<br />
zpool refreservation none default<br />
zpool primarycache all default<br />
zpool secondarycache all default<br />
zpool usedbysnapshots 0 -<br />
zpool usedbydataset 38.6K -<br />
zpool usedbychildren 99.9K -<br />
zpool usedbyrefreservation 0 -<br />
zpool logbias latency default<br />
zpool dedup off default<br />
zpool mlslabel none default<br />
zpool sync standard default<br />
zpool refcompressratio 1.00x -<br />
zpool written 38.6K -<br />
zpool snapdev hidden default<br />
</nowiki>}}<br />
<br />
=== Modify properties ===<br />
Disable the recording of access time in the zpool:<br />
# zfs set atime=off zpool<br />
<br />
Verify that the property has been set on the zpool:<br />
{{hc|# zfs get atime|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool atime off local<br />
</nowiki>}}<br />
<br />
{{Tip|1=This option, like many others, can also be toggled off when creating the zpool by appending {{ic|1=-O atime=off}} to the creation step.}}<br />
<br />
== Add Content to the Zpool and Query Compression Performance ==<br />
Fill the zpool with files. For this example, first enable compression. ZFS supports many compression types, including lzjb, gzip, gzip-N, zle, and lz4. Using a setting of simply ''on'' will call the default algorithm (lzjb), but lz4 is a nice alternative. See the zfs man page for more.<br />
<br />
# zfs set compression=lz4 zpool<br />
<br />
In this example, the linux source tarball is copied over and since lz4 compression has been enabled on the zpool, the corresponding compression ratio can be queried as well.<br />
<br />
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.11.tar.xz<br />
$ tar xJf linux-3.11.tar.xz -C /zpool <br />
<br />
To see the compression ratio achieved:<br />
{{hc|# zfs get compressratio|<nowiki><br />
NAME PROPERTY VALUE SOURCE<br />
zpool compressratio 2.32x -<br />
</nowiki>}}<br />
<br />
== Simulate a Disk Failure and Rebuild the Zpool ==<br />
To simulate catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning), zero out one of the VDEVs.<br />
$ dd if=/dev/zero of=/scratch/2.img bs=4M count=1 2>/dev/null<br />
<br />
Since we used a blocksize (bs) of 4M, the once-2G image file is now a mere 4M:<br />
{{hc|$ ls -lh /scratch |<nowiki><br />
total 317M<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 1.img<br />
-rw-r--r-- 1 facade users 4.0M Oct 20 09:09 2.img<br />
-rw-r--r-- 1 facade users 2.0G Oct 20 09:13 3.img<br />
</nowiki>}}<br />
<br />
The zpool remains online despite the corruption. Note that if a physical disk had failed, dmesg and related logs would be full of errors. To detect when damage occurs, users must execute a scrub operation.<br />
<br />
# zpool scrub zpool<br />
<br />
Depending on the size and speed of the underlying media as well as the amount of data in the zpool, the scrub may take hours to complete.<br />
The status of the scrub can be queried:<br />
{{hc|# zpool status zpool|<nowiki><br />
pool: zpool<br />
state: DEGRADED<br />
status: One or more devices could not be used because the label is missing or<br />
invalid. Sufficient replicas exist for the pool to continue<br />
functioning in a degraded state.<br />
action: Replace the device using 'zpool replace'.<br />
see: http://zfsonlinux.org/msg/ZFS-8000-4J<br />
scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct 20 09:13:39 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/2.img UNAVAIL 0 0 0 corrupted data<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
Since we zeroed out one of our VDEVs, let's simulate adding a new 2G HDD by creating a new image file and adding it to the zpool:<br />
$ truncate -s 2G /scratch/new.img<br />
# zpool replace zpool /scratch/2.img /scratch/new.img<br />
<br />
Upon replacing the VDEV with a new one, zpool rebuilds ("resilvers") the data from the data and parity information in the remaining two good VDEVs. Check the status of this process:<br />
{{hc|# zpool status zpool |<nowiki><br />
pool: zpool<br />
state: ONLINE<br />
scan: resilvered 117M in 0h0m with 0 errors on Sun Oct 20 09:21:22 2013<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
zpool ONLINE 0 0 0<br />
raidz1-0 ONLINE 0 0 0<br />
/scratch/1.img ONLINE 0 0 0<br />
/scratch/new.img ONLINE 0 0 0<br />
/scratch/3.img ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
</nowiki>}}<br />
<br />
== Snapshots and Recovering Deleted Files ==<br />
Since ZFS is a copy-on-write filesystem, modified data is written to new blocks rather than overwriting the old data in place. Saving changes to a file therefore effectively creates another copy of that file (plus the changes made). Snapshots take advantage of this fact and allow users access to older versions of files, provided a snapshot has been taken.<br />
<br />
{{Note|When using snapshots, many Linux programs that report on filesystem space such as '''df''' will report inaccurate results due to the unique way snapshots are used on ZFS. The output of {{ic|/usr/bin/zfs list}} will deliver an accurate report of the amount of available and free space on the zpool.}}<br />
<br />
To keep this simple, we will create a dataset within the zpool and snapshot it. Snapshots can be taken either of the entire zpool or of a dataset within the pool. They differ only in their naming conventions:<br />
<br />
{| class="wikitable" align="center"<br />
|-<br />
! Snapshot Target !! Snapshot Name<br />
|-<br />
| Entire zpool || zpool@snapshot-name<br />
|- <br />
| Dataset || zpool/dataset@snapshot-name<br />
|- <br />
|}<br />
<br />
Make a new data set and take ownership of it.<br />
# zfs create zpool/docs<br />
# chown facade:users /zpool/docs<br />
<br />
{{Note|The lack of a preceding / in the create command is intentional, not a typo!}}<br />
=== Time 0 ===<br />
Add some files to the new dataset (/zpool/docs):<br />
$ wget -O /zpool/docs/Moby_Dick.txt http://www.gutenberg.org/ebooks/2701.txt.utf-8<br />
$ wget -O /zpool/docs/War_and_Peace.txt http://www.gutenberg.org/ebooks/2600.txt.utf-8<br />
$ wget -O /zpool/docs/Beowulf.txt http://www.gutenberg.org/ebooks/16328.txt.utf-8<br />
<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.06M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
This shows that we have 4.92M of data used by our books in {{ic|/zpool/docs}}.<br />
<br />
=== Time +1 ===<br />
Now take a snapshot of the dataset:<br />
# zfs snapshot zpool/docs@001<br />
<br />
Again run the list command:<br />
{{hc|# zfs list|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool 5.07M 3.91G 40.0K /zpool<br />
zpool/docs 4.92M 3.91G 4.92M /zpool/docs<br />
</nowiki>}}<br />
<br />
Note that the size in the USED column did not change, showing that the snapshot takes up no space in the zpool, since nothing has changed in these three files.<br />
<br />
We can list out the snapshots like so, and again confirm that the snapshot takes up no space but instead '''refers to''' the original files, which take up 4.92M (their original size):<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 0 - 4.92M -<br />
</nowiki>}}<br />
<br />
=== Time +2 ===<br />
Now let's add some additional content and create a new snapshot:<br />
$ wget -O /zpool/docs/Les_Mis.txt http://www.gutenberg.org/ebooks/135.txt.utf-8<br />
# zfs snapshot zpool/docs@002<br />
<br />
Generate the new list to see how the space has changed:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
Here we can see that the 001 snapshot takes up 25.3K of metadata and still points to the original 4.92M of data, and the new snapshot takes up no space and refers to a total of 8.17M.<br />
<br />
=== Time +3 ===<br />
Now let's simulate an accidental overwrite of a file and subsequent data loss:<br />
$ echo "this book sucks" > /zpool/docs/War_and_Peace.txt<br />
<br />
Again, take another snapshot:<br />
# zfs snapshot zpool/docs@003<br />
<br />
Now list out the snapshots and notice that the amount of data referred to decreased by about 3.1M:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 0 - 5.04M -<br />
</nowiki>}}<br />
<br />
We can easily recover from this situation by looking inside one or both of our older snapshots for a good copy of the file. ZFS stores its snapshots in a hidden directory under the dataset: {{ic|/zpool/docs/.zfs/snapshot}}:<br />
{{hc|$ ls -l /zpool/docs/.zfs/snapshot|<nowiki><br />
total 0<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 001<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 002<br />
dr-xr-xr-x 1 root root 0 Oct 20 16:09 003<br />
</nowiki>}}<br />
<br />
We can copy a good version of the book back out from any of our snapshots to any location on or off the zpool:<br />
$ cp /zpool/docs/.zfs/snapshot/002/War_and_Peace.txt /zpool/docs<br />
{{Note|Using <TAB> for autocompletion will not work by default but can be changed by modifying the ''snapdir'' property on the pool or dataset.}}<br />
<br />
# zfs set snapdir=visible zpool/docs<br />
<br />
Now enter a snapshot dir or two:<br />
$ cd /zpool/docs/.zfs/snapshot/001<br />
$ cd /zpool/docs/.zfs/snapshot/002<br />
<br />
Now run the ''df'' command:<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
zpool/docs@001 4.0G 4.9M 4.0G 1% /zpool/docs/.zfs/snapshot/001<br />
zpool/docs@002 4.0G 8.2M 4.0G 1% /zpool/docs/.zfs/snapshot/002<br />
<br />
{{Note|Each snapshot directory the user enters gets mounted and shown by ''df'' as above; this is reversed if the zpool is exported and then re-imported, or if the server is rebooted.}}<br />
<br />
For example:<br />
# zpool export zpool<br />
# zpool import -d /scratch/ zpool<br />
$ df -h | grep zpool<br />
zpool 4.0G 0 4.0G 0% /zpool<br />
zpool/docs 4.0G 5.0M 4.0G 1% /zpool/docs<br />
<br />
=== Time +4 ===<br />
Now that everything is back to normal, we can create another snapshot of this state:<br />
# zfs snapshot zpool/docs@004<br />
<br />
And the list now becomes:<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@001 25.3K - 4.92M -<br />
zpool/docs@002 25.5K - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}<br />
<br />
=== Deleting Snapshots ===<br />
The limit to the number of snapshots users can save is 2^64. Users can delete a snapshot like so:<br />
# zfs destroy zpool/docs@001<br />
<br />
{{hc|# zfs list -t snapshot|<nowiki><br />
NAME USED AVAIL REFER MOUNTPOINT<br />
zpool/docs@002 3.28M - 8.17M -<br />
zpool/docs@003 155K - 5.04M -<br />
zpool/docs@004 0 - 8.17M -<br />
</nowiki>}}</div>Demizerhttps://wiki.archlinux.org/index.php?title=Install_Arch_Linux_on_ZFS&diff=311507Install Arch Linux on ZFS2014-04-22T23:32:17Z<p>Demizer: Remove ZFS installation link.</p>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
This article details the steps required to install Arch Linux onto a root ZFS filesystem. This article supplements the [[Beginners' guide]].<br />
<br />
== Installing archzfs ==<br />
<br />
Using the archzfs repository is highly recommended for effortless updates.<br />
<br />
{{Warning|The ZFS packages are tied to the kernel version they were built against. This means it will not be possible to perform kernel updates until new packages (or package sources) are released by the ZFS package maintainer.}}<br />
<br />
{{Note|1=This guide uses the unofficial archzfs repository hosted at http://demizerone.com/demz-repo-core. This repository is maintained by Jesus Alvarez and is signed with his PGP key: [http://pgp.mit.edu:11371/pks/lookup?op=vindex&search=0x5E1ABF240EE7A126 0EE7A126].}}<br />
<br />
=== Embedding archzfs into archiso ===<br />
<br />
See [[ZFS#Embed_the_archzfs_packages_into_an_archiso|ZFS]] article.<br />
<br />
=== Using the archzfs repository ===<br />
<br />
{{Merge|Unofficial user repositories|See [[Help:Style#Unofficial repositories]] for details.}}<br />
<br />
Activate the required network connection and then edit {{ic|/etc/pacman.d/mirrorlist}} and configure the mirrors for pacman to use. Once that is done, edit {{ic|/etc/pacman.conf}} and add the archzfs repository:<br />
<br />
[demz-repo-core]<br />
Server = http://demizerone.com/$repo/$arch<br />
<br />
{{Note|You should change the repo name from 'demz-repo-core' to 'demz-repo-archiso' if you are using the standard Arch ISOs to install (did not build your own, above).}}<br />
<br />
Next, add the archzfs maintainer's PGP key to the local trust:<br />
<br />
# pacman-key -r 0EE7A126<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
{{Note|1=The repository maintainer is recovering from surgery and has temporarily handed publish control to someone else, so you may need to use that key instead: 5EE46C4C [https://aur.archlinux.org/packages/zfs/?comments=all]}}<br />
<br />
Finally, update the pacman databases and install ''archzfs'':<br />
<br />
# pacman -Syy archzfs<br />
<br />
{{Tip|This is also the best time to install your favorite text editor (otherwise {{Pkg|nano}} or {{Pkg|vi}} will have to be used) and the proper partition tools: for [[UEFI]] and [[GPT]] install {{Pkg|dosfstools}} and {{Pkg|gptfdisk}}.}}<br />
<br />
== Partition the destination drive ==<br />
<br />
Review [[Beginners' guide#Prepare_the_storage_drive]] for information on determining the partition table type to use for ZFS. ZFS supports GPT and MBR partition tables.<br />
<br />
ZFS manages its own partitions, so only a basic partition table scheme is required. The partition that will contain the ZFS filesystem should be of the type {{ic|bf00}}, or "Solaris Root".<br />
<br />
=== Partition scheme ===<br />
<br />
Here is an example, using MBR, of a basic partition scheme that could be employed for your ZFS root setup:<br />
<br />
{{bc|<nowiki><br />
Part Size Type<br />
---- ---- -------------------------<br />
1 512M Ext boot partition (8300)<br />
2 XXXG Solaris Root (bf00)</nowiki><br />
}}<br />
<br />
Here is an example using GPT. The BIOS boot partition contains the bootloader.<br />
<br />
{{bc|<nowiki><br />
Part Size Type<br />
---- ---- -------------------------<br />
1 2M BIOS boot partition (ef02)<br />
2 512M Ext boot partition (8300)<br />
3 XXXG Solaris Root (bf00)</nowiki><br />
}}<br />
<br />
An additional partition may be required depending on your hardware and chosen bootloader. Consult [[Beginners' guide#Install_and_configure_a_bootloader]] for more info.<br />
<br />
{{Tip|Bootloaders with support for ZFS are described in [[#Install and configure the bootloader]].}}<br />
<br />
== Format the destination disk ==<br />
<br />
Format the boot partition as well as any other system partitions. Do not do anything to the Solaris partition or to the BIOS boot partition: ZFS will manage the first, and your bootloader the second.<br />
<br />
== Setup the ZFS filesystem ==<br />
<br />
First, make sure the ZFS modules are loaded,<br />
<br />
# modprobe zfs<br />
<br />
=== Create the root zpool ===<br />
<br />
# zpool create zroot /dev/disk/by-id/''id-to-partition''<br />
<br />
{{Warning|Always use the persistent device names from {{ic|/dev/disk/by-id/}} when working with ZFS, otherwise import errors will occur.}}<br />
<br />
=== Create necessary filesystems ===<br />
<br />
If so desired, sub-filesystem mount points such as {{ic|/home}} and {{ic|/root}} can be created with the following commands:<br />
<br />
# zfs create zroot/home -o mountpoint=/home<br />
# zfs create zroot/root -o mountpoint=/root<br />
<br />
Note that if you want to use other datasets for system directories ({{ic|/var}} or {{ic|/etc}} included), your system will not boot unless they are listed in {{ic|/etc/fstab}}! We will address that at the appropriate time in this tutorial.<br />
<br />
=== Swap partition ===<br />
<br />
ZFS does not allow the use of swapfiles, but it is possible to use a ZFS volume (ZVOL) as a swap partition. It is important to set the ZVOL block size to match the system page size; for x86_64 systems that is 4K.<br />
<br />
Create an 8 GB (or whatever size is required) ZFS volume:<br />
<br />
# zfs create -V 8G -b 4K zroot/swap<br />
<br />
Initialize and enable the volume as a swap partition:<br />
<br />
# mkswap /dev/zvol/zroot/swap<br />
# swapon /dev/zvol/zroot/swap<br />
<br />
After using {{ic|pacstrap}} to install the base system, edit {{ic|/zroot/etc/fstab}} to ensure the swap partition is mounted at boot:<br />
<br />
/dev/zvol/zroot/swap none swap defaults 0 0<br />
<br />
Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:<br />
<br />
# zfs umount -a<br />
<br />
=== Configure the root filesystem ===<br />
<br />
First, set the mount point of the root filesystem:<br />
<br />
# zfs set mountpoint=/ zroot<br />
<br />
and optionally, any sub-filesystems:<br />
<br />
# zfs set mountpoint=/home zroot/home<br />
# zfs set mountpoint=/root zroot/root<br />
<br />
and if you have separate datasets for system directories (e.g. {{ic|/var}} or {{ic|/usr}}):<br />
<br />
# zfs set mountpoint=legacy zroot/usr<br />
# zfs set mountpoint=legacy zroot/var<br />
<br />
and put them in {{ic|/etc/fstab}}<br />
{{hc|/etc/fstab|<br />
# <file system> <dir> <type> <options> <dump> <pass><br />
zroot/usr /usr zfs defaults,noatime 0 0<br />
zroot/var /var zfs defaults,noatime 0 0}}<br />
<br />
Set the bootfs property on the descendant root filesystem so the boot loader knows where to find the operating system.<br />
<br />
# zpool set bootfs=zroot zroot<br />
<br />
Export the pool,<br />
<br />
# zpool export zroot<br />
<br />
{{Warning|Do not skip this, otherwise you will be required to use {{ic|-f}} when importing your pools. This unloads the imported pool.}}<br />
{{Note|This might fail if you added a swap partition above; turn it off first with the ''swapoff'' command.}}<br />
<br />
Finally, re-import the pool,<br />
<br />
# zpool import -d /dev/disk/by-id -R /mnt zroot<br />
<br />
{{Note|{{ic|-d}} is not the actual device id, but the {{ic|/dev/disk/by-id}} directory containing the symbolic links.}}<br />
<br />
If there is an error in this step, you can export the pool to redo the command. The ZFS filesystem is now ready to use.<br />
<br />
Be sure to bring the zpool.cache file into your new system. This is required later for the ZFS daemon to start.<br />
<br />
# cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache<br />
<br />
If you do not have {{ic|/etc/zfs/zpool.cache}}, create it:<br />
<br />
# zpool set cachefile=/etc/zfs/zpool.cache zroot<br />
<br />
== Install and configure Arch Linux ==<br />
<br />
Follow the steps in the [[Beginners' guide]]; it will be noted where special consideration must be taken for ZFSonLinux.<br />
<br />
* First mount any boot or system partitions using the mount command.<br />
<br />
* Install the base system.<br />
<br />
* The procedure described in [[Beginners' guide#Generate an fstab]] is usually overkill for ZFS. ZFS usually auto-mounts its own datasets, so ZFS entries are not needed in the {{ic|fstab}} file unless the user made datasets of system directories. To generate {{ic|fstab}} entries for the other filesystems, use:<br />
# genfstab -U -p /mnt | grep boot >> /mnt/etc/fstab<br />
<br />
* Edit the {{ic|/etc/fstab}}:<br />
<br />
{{Note|<br />
* If you chose to create datasets for system directories, keep them in this {{ic|fstab}}! Comment out the lines for the {{ic|/}}, {{ic|/root}}, and {{ic|/home}} mountpoints, rather than deleting them. You may need those UUIDs later if something goes wrong.<br />
* Anyone who just stuck with the guide's directions can delete everything except for the swap and the boot/EFI partition entries. It is conventional to replace the swap's UUID with {{ic|/dev/zvol/zroot/swap}}.<br />
}}<br />
<br />
* When creating the initial ramdisk, first edit {{ic|/etc/mkinitcpio.conf}} and add {{ic|zfs}} before {{ic|filesystems}}. Also, move the {{ic|keyboard}} hook before {{ic|zfs}} so you can type in the console if something goes wrong. You may also remove {{ic|fsck}} (if you are not using Ext3 or Ext4). Your {{ic|HOOKS}} line should look something like this:<br />
HOOKS="base udev autodetect modconf block keyboard zfs filesystems"<br />
<br />
* Regenerate the initramfs with the command:<br />
# mkinitcpio -p linux<br />
<br />
== Install and configure the bootloader ==<br />
<br />
=== For BIOS motherboards ===<br />
<br />
Follow [[GRUB#BIOS_systems_2]] to install GRUB onto your disk. {{ic|grub-mkconfig}} does not properly detect the ZFS filesystem, so it is necessary to edit {{ic|grub.cfg}} manually:<br />
<br />
{{hc|/boot/grub/grub.cfg|<nowiki><br />
set timeout=2<br />
set default=0<br />
<br />
# (0) Arch Linux<br />
menuentry "Arch Linux" {<br />
set root=(hd0,msdos1)<br />
linux /vmlinuz-linux zfs=zroot rw<br />
initrd /initramfs-linux.img<br />
}<br />
</nowiki>}}<br />
<br />
If you did not create a separate {{ic|/boot}} partition, the kernel and initrd paths have to be in the following format:<br />
<br />
/dataset/@/actual/path <br />
<br />
Example:<br />
<br />
linux /@/boot/vmlinuz-linux zfs=zroot rw<br />
initrd /@/boot/initramfs-linux.img<br />
<br />
=== For UEFI motherboards ===<br />
<br />
Use {{ic|EFISTUB}} and {{ic|rEFInd}} for the UEFI boot loader. See [[Beginners' guide#For UEFI motherboards]]. The kernel parameters in {{ic|refind_linux.conf}} for ZFS should include {{ic|1=zfs=bootfs}} or {{ic|1=zfs=zroot}} so the system can boot from ZFS. The {{ic|root}} and {{ic|rootfstype}} parameters are not needed.<br />
<br />
== Unmount and restart ==<br />
<br />
We are almost done!<br />
# exit<br />
# umount /mnt/boot<br />
# zfs umount -a<br />
# zpool export zroot<br />
Now reboot.<br />
<br />
{{Warning|If you do not properly export the zpool, the pool will refuse to import in the ramdisk environment and you will be stuck at the busybox terminal.}}<br />
<br />
== After the first boot ==<br />
<br />
If everything went fine up to this point, your system will boot. Once.<br />
For your system to be able to reboot without issues, you need to enable the {{ic|zfs}} service and set the hostid.<br />
<br />
When running ZFS on root, the machine's hostid will not be available at the time of mounting the root filesystem. There are two solutions to this. The first is to place your spl hostid in the [[kernel parameters]] in your boot loader, for example by adding {{ic|<nowiki>spl.spl_hostid=0x00bab10c</nowiki>}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}}, and then regenerate the initramfs image, which will copy the hostid into it.<br />
<br />
# hostid > /etc/hostid<br />
# mkinitcpio -p linux<br />
<br />
Your system should work and reboot properly now.<br />
<br />
== See also ==<br />
<br />
* [https://github.com/dajhorn/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem HOWTO install Ubuntu to a Native ZFS Root]<br />
* [http://lildude.co.uk/zfs-cheatsheet ZFS cheatsheet]<br />
* [http://www.funtoo.org/wiki/ZFS_Install_Guide Funtoo ZFS install guide]</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS_on_FUSE&diff=311506ZFS on FUSE2014-04-22T23:31:45Z<p>Demizer: Remove ZFS installation link.</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related articles end}}<br />
{{Accuracy|Be aware that this package is still using systemv init scripts.}}<br />
ZFS on FUSE/Linux is a project bringing the ZFS file system to Linux. Because its license is incompatible with the GPL, it is argued that ZFS cannot exist as a direct kernel module; this restriction, however, does not apply to an implementation as a FUSE file system.<br />
<br />
Some of the capabilities of version 0.7.0 (as of Feb 2012) are limited compared to the original implementation (incomplete list):<br />
<br />
* sharenfs uses different syntax than on Solaris;<br />
* it is possible to create snapshots, however, they have to be cloned to another disk to actually be able to browse the file system (due to missing .zfs special directory).<br />
<br />
== Installation ==<br />
<br />
[[pacman|Install]] {{AUR|zfs-fuse}} from the [[AUR]].<br />
<br />
Read the messages after installation and be sure to edit the configuration files as per your needs.<br />
<br />
Further, make sure that fuse module is loaded (as root):<br />
<br />
# modprobe fuse<br />
<br />
Start zfs-fuse daemon:<br />
<br />
# rc.d start zfs-fuse<br />
<br />
You will want to add the {{ic|fuse}} module to the MODULES array in {{ic|/etc/rc.conf}} and {{ic|zfs-fuse}} to the DAEMONS array to have it started after reboot; see the sketch below.<br />
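<br />
A minimal sketch of the relevant {{ic|/etc/rc.conf}} entries; the ellipses stand for whatever entries your arrays already contain:<br />
<br />
 MODULES=(... fuse)<br />
 DAEMONS=(... zfs-fuse)<br />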
<br />
== Usage ==<br />
<br />
=== Quick setup guide ===<br />
<br />
Search on Google for how ZFS works. Be sure which block device you specify, and make a backup before continuing!<br />
<br />
Briefly, you will want to create a pool (as root): <br />
{{Warning|Always use id names from {{ic|/dev/disk/by-id/}} when working with ZFS, otherwise import errors will occur.}}<br />
# zpool create mypool /dev/disk/by-id/''id-to-partition''<br />
Alternatively:<br />
# zpool create mypool /dev/sdb<br />
<br />
This will create a "pool" called "mypool" on physical block device {{ic|/dev/sdb}} (on whole disk, not on a single partition). Also, a first dataset (aka zfs file system) with the same name will be created and automatically mounted to {{ic|/mypool}}.<br />
<br />
You can create other datasets (file systems) within the pool. The reason for doing so is to be able to set various properties on them, to be able to create snapshots independently, etc.<br />
<br />
# zfs create mypool/my1stdataset <br />
<br />
Note that "mypool" is a reference to an existing pool, not to the mount point (which is {{ic|/mypool}} at the moment).<br />
<br />
==== Automatically mount pools and datasets on boot ====<br />
<br />
Pools and datasets can be mounted during boot by adding them to the arrays in {{ic|/etc/conf.d/zfs-fuse}}.<br />
<br />
If you like to have all pools and all datasets mounted you can add a {{ic|-a}} in the array:<br />
<br />
ZFS_IMPORT=("-a")<br />
<br />
If you like to have all datasets mounted add {{ic|-a}} in that array as well:<br />
<br />
ZFS_MOUNT=("-a")<br />
<br />
=== NFS shares ===<br />
<br />
ZFS can export datasets (file systems) as NFS without the need to place the directories in {{ic|/etc/exports}}.<br />
<br />
The nfs-kernel daemon should be installed and started:<br />
<br />
# pacman -S nfs-utils<br />
# rc.d start nfs-kernel<br />
<br />
Add the nfs-server daemon to the DAEMONS array in {{ic|/etc/rc.conf}}, well before zfs-fuse.<br />
<br />
The syntax of setting the NFS share is:<br />
<br />
# zfs set sharenfs="host1:option1,option2,option3 host2:option1,...,optionN" dataset_name<br />
# zfs set sharenfs="192.168.1.1/24:ro 192.168.1.3:rw" mypool/my1stdataset<br />
<br />
The exported shares and their options can be listed as follows:<br />
<br />
# cat /var/lib/nfs/etab<br />
<br />
{{Note|<br />
* It does not work to use only {{ic|zfs set sharenfs&#61;on mypool/my1stdataset}} - it is not possible to mount such a share from other computers.<br />
* Only NFS version 3 seems to be supported for readonly ({{ic|ro}}) shares (use {{ic|mount servername:/mypool/my1stdataset /mnt -o vers&#61;3,defaults}} to force that version)<br />
* It does not work to use "*" as shortcut for any host name. Use the actual IP address/mask of NFS clients list.<br />
* Running subsequent {{ic|zfs set sharenfs&#61;"10.1.0.3:rw" mypool/my1stdataset}} will ''add'' the extra host to the existing exports.<br />
* To remove existing exports, run {{ic|zfs set sharenfs&#61;off mypool/my1stdataset}}.<br />
* To enable sharing after reboot, put {{ic|zfs share -a}} in your {{ic|/etc/rc.local}}.<br />
* It is possible to use also the regular way of exporting NFS shares via {{ic|/etc/exports}}, however, do not set it up for the shares, which are exported by ZFS directly!<br />
}}</div>Demizerhttps://wiki.archlinux.org/index.php?title=Talk:ZFS&diff=311505Talk:ZFS2014-04-22T23:27:02Z<p>Demizer: /* Rename Playing with ZFS */ new section</p>
<hr />
<div>== Merging with Installing Arch Linux on ZFS? ==<br />
<br />
<s>Does it make sense to keep two different articles? What about just add the sections about how-to add zfs packages to the installation media and the trouble shooting here?<br />
<br />
:Interesting. I think it could work, but I am not sure where to place it. The installing on zfs article is very long. Do you have any ideas? [[User:Demizer|Demizer]] ([[User talk:Demizer|talk]]) 03:24, 2 March 2013 (UTC)<br />
<br />
::I think most of the other article is actually wrongly placed. The information about the partition, formatting, and mounting are covered in the general installation guide, I am using zfs in a bios and efi system and I ignored it. What is really important is: adding zfs packages to the installation media, creating the pool and the fs for installation, and the kernel line to mount zfs root.</s><br />
<br />
==Max size of ZFS==<br />
On wikipedia ( http://en.wikipedia.org/wiki/ZFS )it is claimed that <br />
<br />
A ZFS file system can store up to 256 quadrillion zettabytes (ZB)<br />
<br />
This means to my understanding 256000 quadrillion exabyte. Here the claim is more moderate : 16 Exabyte. I suppose the difference comes from some 64/128 bit confusion and is of course more of theoretical interest.<br />
[[User:Michaelcochez|Michaelcochez]] ([[User talk:Michaelcochez|talk]]) 12:04, 15 July 2013 (UTC)<br />
<br />
::They aren't conflicting claims, they claim different things. Both the maximum file size on ZFS and the maximum volume size is 16EB, but the filesystem itself can be up to 256 quadrillion zettabytes. [[User:Kyrias|Kyrias]] ([[User talk:Kyrias|talk]]) 17:18, 21 August 2013 (UTC)<br />
<br />
== Rename Playing with ZFS ==<br />
<br />
I think "Experimenting with ZFS" would be a more suitable name. [[User:Demizer|Demizer]] ([[User talk:Demizer|talk]]) 23:27, 22 April 2014 (UTC)</div>Demizerhttps://wiki.archlinux.org/index.php?title=Talk:ZFS&diff=311504Talk:ZFS2014-04-22T23:26:12Z<p>Demizer: /* Merging with Installing Arch Linux on ZFS? */</p>
<hr />
<div>== Merging with Installing Arch Linux on ZFS? ==<br />
<br />
<s>Does it make sense to keep two different articles? What about just add the sections about how-to add zfs packages to the installation media and the trouble shooting here?<br />
<br />
:Interesting. I think it could work, but I am not sure where to place it. The installing on zfs article is very long. Do you have any ideas? [[User:Demizer|Demizer]] ([[User talk:Demizer|talk]]) 03:24, 2 March 2013 (UTC)<br />
<br />
::I think most of the other article is actually wrongly placed. The information about the partition, formatting, and mounting are covered in the general installation guide, I am using zfs in a bios and efi system and I ignored it. What is really important is: adding zfs packages to the installation media, creating the pool and the fs for installation, and the kernel line to mount zfs root.</s><br />
<br />
==Max size of ZFS==<br />
On wikipedia ( http://en.wikipedia.org/wiki/ZFS )it is claimed that <br />
<br />
A ZFS file system can store up to 256 quadrillion zettabytes (ZB)<br />
<br />
This means to my understanding 256000 quadrillion exabyte. Here the claim is more moderate : 16 Exabyte. I suppose the difference comes from some 64/128 bit confusion and is of course more of theoretical interest.<br />
[[User:Michaelcochez|Michaelcochez]] ([[User talk:Michaelcochez|talk]]) 12:04, 15 July 2013 (UTC)<br />
<br />
::They aren't conflicting claims, they claim different things. Both the maximum file size on ZFS and the maximum volume size is 16EB, but the filesystem itself can be up to 256 quadrillion zettabytes. [[User:Kyrias|Kyrias]] ([[User talk:Kyrias|talk]]) 17:18, 21 August 2013 (UTC)</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS&diff=311503ZFS2014-04-22T23:22:01Z<p>Demizer: Move Playing with ZFS section.</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management – zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL-incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|spl-git}} as a dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
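<br />
On a fresh system with no pools created yet, a working installation typically reports something like:<br />
<br />
{{hc|# zpool status|<nowiki><br />
no pools available<br />
</nowiki>}}<br />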
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages, with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
  command -v $i >/dev/null 2>&1 || {<br />
    echo "I require $i but it's not installed. Aborting." >&2<br />
    exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
  [[ -d $i ]] && rm -rf $i<br />
  cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
  cd "$WORK/$i"<br />
  sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Playing with ZFS==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs), which can be simple files like {{ic|~/zfs0.img}}, {{ic|~/zfs1.img}}, {{ic|~/zfs2.img}}, etc., with no possibility of real data loss, are encouraged to see the [[Playing_with_ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
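<br />
As a quick illustration (a sketch only; the file names follow the examples above, and a pool created this way is disposable), file-backed VDEVs can be created and pooled like so:<br />
<br />
 # truncate -s 2G ~/zfs0.img ~/zfs1.img ~/zfs2.img<br />
 # zpool create test raidz ~/zfs0.img ~/zfs1.img ~/zfs2.img<br />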
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
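<br />
For example, for a pool named ''bigdata'' (the pool created later in this article):<br />
<br />
 # zpool set cachefile=/etc/zfs/zpool.cache bigdata<br />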
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
Having identified the list of drives, it is now time to get the ids of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of fewer than 10 devices. To find the ids, simply:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool. Get them from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because of backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (See the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
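<br />
If {{ic|ashift}} was set explicitly, it can be double-checked afterwards with {{ic|zdb}} from the zfs utilities (a sketch, using the example pool above):<br />
<br />
 # zdb -C bigdata | grep ashift<br />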
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
Although many knobs are available on a zfs pool, there are two major ones users can consider:<br />
*atime<br />
*compression<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool; it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
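<br />
To confirm the setting and see the ratio actually achieved, one can query both properties ({{ic|compressratio}} is a read-only property):<br />
<br />
 # zfs get compression,compressratio <pool><br />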
<br />
Other options for zfs can be displayed, again using the zfs command:<br />
# zfs get all <pool><br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS-specific attributes to the dataset. For example, one could assign a quota limit to a dataset, or to a nested dataset within it (quotas apply to datasets, not plain directories):<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset><br />
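<br />
For example, with the ''bigdata'' pool created earlier (the dataset name here is hypothetical):<br />
<br />
 # zfs create bigdata/media<br />
 # zfs set quota=20G bigdata/media<br />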
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid is different in the archiso than in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt made to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can be produced at boot time, abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair, either by exporting the pool or by adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not support swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the <tt>getconf PAGESIZE</tt> command (the default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create an 8GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
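<br />
To activate the swap volume immediately, without rebooting (assuming the device node above exists):<br />
<br />
 # swapon /dev/zvol/<pool>/swap<br />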
<br />
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow use of the hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Control can also be made more fine-grained per label: if, for example, no monthlies are to be kept on a snapshot, set <tt>com.sun:auto-snapshot:monthly=false</tt>.<br />
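<br />
For example (the dataset names are placeholders):<br />
<br />
 # zfs set com.sun:auto-snapshot=false <pool>/<dataset><br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/<dataset><br />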
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise-class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MiB)<br />
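<br />
The same limit can usually be applied to a running system through the module's sysfs interface (a sketch; this assumes the loaded zfs module exposes the {{ic|zfs_arc_max}} parameter):<br />
<br />
 # echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max<br />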
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs pool,<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zpool create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
{{bc|# zpool import -a -f}}<br />
<br />
now export the pool:<br />
<br />
{{bc|# zpool export <pool>}}<br />
<br />
To see the available pools, use,<br />
<br />
{{bc|# zpool status}}<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso, the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double-check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. The following is just an example:<br />
% hostid<br />
0a0af0f8<br />
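<br />
With the hostid known, one way to pin it is the same approach as in [[#No hostid found]]: add it to the [[kernel parameters]] and rebuild the initramfs afterwards:<br />
<br />
 spl.spl_hostid=0x0a0af0f8<br />
<br />
 # mkinitcpio -p linux<br />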
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required.<br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, download the {{ic|archiso}} package.<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file, adding these lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file, adding these lines (TODO: correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More information about the process can be read in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while retaining all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so the {{ic|zpool create}} commands only need to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
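<br />
To have the mapping created automatically at boot, one possible approach (a sketch only; device names and options must be adjusted to match the command above) is an {{ic|/etc/crypttab}} entry:<br />
<br />
 enc  /dev/sdX  /dev/sdZ  plain,hash=sha512,cipher=twofish-xts-plain64,size=512<br />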
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If the versions are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>
<hr />
<div></div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS&diff=311501ZFS2014-04-22T23:16:22Z<p>Demizer: Remove ZFS installation link.</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Playing with ZFS==<br />
The rest of this article cover basic setup and usage of ZFS on physical block devices (HDD and SSD for example). Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like ~/zfs0.img ~/zfs1.img ~/zfs2.img etc. with no possibility of real data loss are encouraged to see the [[Playing_with_ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|spl-git}} as a dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needed sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straight forward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is not necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
Having identified the list of drives, it is now time to get the id's of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the id's, simply:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, than the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions that to include into the pool. Get it from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (See the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
Although many knobs are available on a zfs pool, there are two major ones user can consider:<br />
*atime<br />
*compression<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# sudo zfs get all <pool><br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use :<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including and read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso as the hostid is different in the archiso as it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempts made to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can be produced at boot time abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair by either exporting the pool, or adding the {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]]<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in 2 steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow to use swapfiles, but users can use a ZFS volume (ZVOL) as swap. It is importart to set the ZVOL block size to match the system page size, which can be obtained by the <tt>getconf PAGESIZE</tt> command (default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create a 8GiB zfs volume:<br />
<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
Keep in mind the Hibernate hook must be loaded before filesystems, so using ZVOL as swap will not allow to use hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, set more fine-grained control as well by label, if, for example, no monthlies are to be kept on a snapshot, for example, set <tt>com.sun:auto-snapshot:monthly=false</tt>.<br />
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem,<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zfs create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hosted. There are two solutions, for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader. For example, adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image. Which will copy the hostid into the initramfs image.<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
{{bc|# zpool import -a -f}}<br />
<br />
now export the pool:<br />
<br />
{{bc|# zpool export <pool>}}<br />
<br />
To see the available pools, use,<br />
<br />
{{bc|# zpool status}}<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso the network configuration could be different generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export pool as described above, and then rebuild ramdisk in normally booted system:<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership. So during the first boot the zpool should mount correctly. If it does not there is some other problem.<br />
<br />
Reboot again, if the zfs pool refuses to mount it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number, once the hostid is coherent across the reboots the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. This one is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
Users can always ignore the check adding {{ic|zfs_force&#61;1}} in the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea make an installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required. <br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, download the {{ic|archiso}} package.<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file adding those lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file adding those lines (TODO, correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More informations about the process can be read in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on linux does not support encryption directly, but zpools can be created in dm-crypt block devices. Since the zpool is created on the plain-text<br />
abstraction it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their name is fixed. So you just need to change {{ic|zpool create}} commands to<br />
point to that names. The idea is configuring the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there.<br />
Since zpools can be created in multiple devices (raid, mirroring, striping, ...), it is important all the devices are encrypted otherwise the protection<br />
might be partially lost.<br />
<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed no import errors will occur.<br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS&diff=311500ZFS2014-04-22T23:15:03Z<p>Demizer: Merge Installing ZFS article into installation section.</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|ZFS Installation}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Playing with ZFS==<br />
The rest of this article covers the basic setup and usage of ZFS on physical block devices (HDDs and SSDs, for example). Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs), which can be simple files like ~/zfs0.img ~/zfs1.img ~/zfs2.img etc., with no possibility of real data loss, are encouraged to see the [[Playing_with_ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Installation==<br />
<br />
Install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository. This package has {{AUR|spl-git}} as a dependency. SPL (Solaris Porting Layer) is a Linux Kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
{{Note|1=The zfs-git package replaces the original zfs package from [[AUR]]. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.}}<br />
<br />
For users that desire ZFS builds from stable releases, {{AUR|zfs-lts}} is available from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
{{warning|The ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.}}<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Archiso ===<br />
<br />
For installing Arch Linux into a ZFS root filesystem, install {{AUR|zfs-git}} from the [[Arch User Repository]] or the [[Unofficial user repositories#demz-repo-archiso|demz-repo-archiso]] repository.<br />
<br />
See [[Installing Arch Linux on ZFS]] for more information.<br />
<br />
=== Automated build script ===<br />
<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages, with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon, execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
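<br />
To verify that the property took effect, it can be queried back with {{ic|zpool get}} (a quick check; replace {{ic|<pool>}} as above):<br />
<br />
 # zpool get cachefile <pool><br />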
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the ZFS filesystem.<br />
<br />
Having identified the list of drives, it is now time to get the IDs of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device IDs when creating ZFS storage pools of less than 10 devices. To find the IDs, simply run:<br />
<br />
 $ ls -lah /dev/disk/by-id/<br />
<br />
The IDs should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool, as listed in {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because such disks emulate 512-byte sectors for backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (see the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
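<br />
If {{ic|<nowiki>ashift=12</nowiki>}} was specified, the value the pool actually uses can be double-checked with zdb, which ships with the ZFS utilities (a quick sanity check; {{ic|bigdata}} is the example pool from above):<br />
<br />
 # zdb -C bigdata | grep ashift<br />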
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
Although many knobs are available on a zfs pool, there are two major ones users should consider:<br />
*atime<br />
*compression<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool, and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
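<br />
The effect of compression can be checked later through the read-only {{ic|compressratio}} property (shown here as a quick example):<br />
<br />
 # zfs get compressratio <pool><br />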
<br />
Other options for zfs can be displayed, again using the zfs command:<br />
 # zfs get all <pool><br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
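<br />
To review datasets together with their quotas and space usage afterwards, {{ic|zfs list}} accepts an explicit column selection (the columns below are one possible choice):<br />
<br />
 # zfs list -o name,quota,used,avail,mountpoint<br />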
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
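<br />
Alternatively, a systemd timer can trigger the weekly scrub instead of cron. The units below are a minimal sketch, assuming the example pool {{ic|bigdata}}; they are not provided by any package:<br />
<br />
{{hc|/etc/systemd/system/zfs-scrub.service|<nowiki><br />
# example unit, not shipped by any package<br />
[Unit]<br />
Description=Scrub the bigdata zpool<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/usr/bin/zpool scrub bigdata<br />
</nowiki>}}<br />
<br />
{{hc|/etc/systemd/system/zfs-scrub.timer|<nowiki><br />
# example unit, not shipped by any package<br />
[Unit]<br />
Description=Weekly scrub of the bigdata zpool<br />
<br />
[Timer]<br />
OnCalendar=weekly<br />
Persistent=true<br />
<br />
[Install]<br />
WantedBy=timers.target<br />
</nowiki>}}<br />
<br />
Enable it with {{ic|# systemctl enable zfs-scrub.timer}}.<br />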
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid is different in the archiso than it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt made to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can occur at boot time, abruptly dropping the system into the busybox console and requiring an archiso for an emergency repair, either by exporting the pool or by adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]]<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that has already been created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow the use of swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the <tt>getconf PAGESIZE</tt> command (the default on x86_64 is 4 KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create an 8 GiB ZFS volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as a swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernation, keep a separate partition for it.<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, more fine-grained control is available per label; for example, if no monthly snapshots are to be kept on a dataset, set <tt>com.sun:auto-snapshot:monthly=false</tt> on it.<br />
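<br />
For example, to disable only the monthly snapshots of a hypothetical {{ic|<pool>/scratch}} dataset while leaving the other schedules active:<br />
<br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/scratch<br />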
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
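<br />
Under the hood, such replication is just a snapshot stream piped between the machines. A minimal hand-rolled sketch (the pool, dataset, snapshot, and host names are placeholders):<br />
<br />
 # zfs snapshot <pool>/<dataset>@replica1<br />
 # zfs send <pool>/<dataset>@replica1 | ssh <remotehost> zfs receive <remotepool>/<dataset><br />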
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512 MiB)<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
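<br />
The same limit can also be applied via a modprobe configuration file rather than the boot loader. A sketch using the standard ZFS on Linux module option; regenerate the initramfs afterwards if the zfs module is included in it:<br />
<br />
{{hc|/etc/modprobe.d/zfs.conf|<nowiki><br />
# limit the ARC to 512 MiB (example value)<br />
options zfs zfs_arc_max=536870912<br />
</nowiki>}}<br />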
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a ZFS filesystem:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zpool create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the SPL hostid. There are two solutions for this. Either place the SPL hostid in the [[kernel parameters]] in the boot loader, for example by adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
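<br />
If {{ic|/etc/hostid}} does not exist yet, one way to create it is to write the four hostid bytes to the file by hand. A minimal sketch, assuming the hostid is {{ic|0a0af0f8}} and a little-endian machine (the file holds the raw 32-bit value in host byte order):<br />
<br />
 % hostid<br />
 0a0af0f8<br />
 # printf '\xf8\xf0\x0a\x0a' > /etc/hostid   # bytes of 0a0af0f8, little-endian<br />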
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
{{bc|# zpool import -a -f}}<br />
<br />
now export the pool:<br />
<br />
{{bc|# zpool export <pool>}}<br />
<br />
To see the available pools, use:<br />
<br />
{{bc|# zpool status}}<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso, the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk from the normally booted system:<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double-check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid; the value below is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but this is not advisable as a permanent solution.<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required.<br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, download the {{ic|archiso}} package.<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file, adding these lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file, adding these lines (TODO, correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More information about the process can be found in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
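<br />
To have the {{ic|/dev/mapper/enc}} device recreated automatically at boot, an entry in {{ic|/etc/crypttab}} can be used. The following is only a sketch matching the plain-mode parameters above; the device names and the key device are placeholders, exactly as in the command before:<br />
<br />
{{hc|/etc/crypttab|<nowiki><br />
# name  device     key       options (example values)<br />
enc     /dev/sdX   /dev/sdZ  plain,cipher=twofish-xts-plain64,hash=sha512,size=512<br />
</nowiki>}}<br />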
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will generate the correct kernel module dependency information for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|ZFS Installation}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Playing with ZFS==<br />
The rest of this article cover basic setup and usage of ZFS on physical block devices (HDD and SSD for example). Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like ~/zfs0.img ~/zfs1.img ~/zfs2.img etc. with no possibility of real data loss are encouraged to see the [[Playing_with_ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Installation==<br />
The requisite packages are available in the AUR and in an unofficial repo. Details are provided on the [[ZFS Installation]] article. See [[Installing Arch Linux on ZFS]] for installing the (root) system on it.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straight forward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is not necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
Having identified the list of drives, it is now time to get the id's of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the id's, simply:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, than the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions that to include into the pool. Get it from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
In case Advanced Format disks are used which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (See the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
Although many knobs are available on a zfs pool, there are two major ones user can consider:<br />
*atime<br />
*compression<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# sudo zfs get all <pool><br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use :<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including and read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso as the hostid is different in the archiso as it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempts made to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can be produced at boot time abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair by either exporting the pool, or adding the {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]]<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in 2 steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow to use swapfiles, but users can use a ZFS volume (ZVOL) as swap. It is importart to set the ZVOL block size to match the system page size, which can be obtained by the <tt>getconf PAGESIZE</tt> command (default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create a 8GiB zfs volume:<br />
<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
Keep in mind the Hibernate hook must be loaded before filesystems, so using ZVOL as swap will not allow to use hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, set more fine-grained control as well by label, if, for example, no monthlies are to be kept on a snapshot, for example, set <tt>com.sun:auto-snapshot:monthly=false</tt>.<br />
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[Arch User Repository|AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem,<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zfs create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hosted. There are two solutions, for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader. For example, adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image. Which will copy the hostid into the initramfs image.<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
{{bc|# zpool import -a -f}}<br />
<br />
now export the pool:<br />
<br />
{{bc|# zpool export <pool>}}<br />
<br />
To see the available pools, use,<br />
<br />
{{bc|# zpool status}}<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso the network configuration could be different generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export pool as described above, and then rebuild ramdisk in normally booted system:<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership. So during the first boot the zpool should mount correctly. If it does not there is some other problem.<br />
<br />
Reboot again, if the zfs pool refuses to mount it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number, once the hostid is coherent across the reboots the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. This one is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
Users can always ignore the check adding {{ic|zfs_force&#61;1}} in the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea make an installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required. <br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, download the {{ic|archiso}} package.<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file adding those lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file adding those lines (TODO, correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More informations about the process can be read in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on linux does not support encryption directly, but zpools can be created in dm-crypt block devices. Since the zpool is created on the plain-text<br />
abstraction it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} and their name is fixed. So you just need to change {{ic|zpool create}} commands to<br />
point to that names. The idea is configuring the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there.<br />
Since zpools can be created in multiple devices (raid, mirroring, striping, ...), it is important all the devices are encrypted otherwise the protection<br />
might be partially lost.<br />
<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed no import errors will occur.<br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs-git<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS&diff=311370ZFS2014-04-21T15:23:29Z<p>Demizer: /* does not contain an EFI label */</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|ZFS Installation}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum [[Wikipedia:Zettabyte|256 Zettabyte]] volume size. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
==Playing with ZFS==<br />
The rest of this article cover basic setup and usage of ZFS on physical block devices (HDD and SSD for example). Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like ~/zfs0.img ~/zfs1.img ~/zfs2.img etc. with no possibility of real data loss are encouraged to see the [[Playing_with_ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
==Installation==<br />
The requisite packages are available in the AUR and in an unofficial repo. Details are provided on the [[ZFS Installation]] article. See [[Installing Arch Linux on ZFS]] for installing the (root) system on it.<br />
<br />
==Configuration==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straight forward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
{{Note|The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.}}<br />
<br />
===Automatic Start===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
==Systemd==<br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
==Create a storage pool==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
Having identified the list of drives, it is now time to get the ids of the drives to add to the zpool. The [http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool zfs on Linux developers recommend] using device ids when creating ZFS storage pools of less than 10 devices. To find the ids:<br />
<br />
$ ls -lah /dev/disk/by-id/<br />
<br />
The ids should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> raidz <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz''': This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See [https://blogs.oracle.com/bonwick/entry/raid_z Jeff Bonwick's Blog -- RAID-Z] for more information about raidz.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool, taken from {{ic|/dev/disk/by-id}}.<br />
<br />
Here is an example for the full command:<br />
<br />
# zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If Advanced Format disks with a native sector size of 4096 bytes instead of 512 bytes are used, the automated sector size detection algorithm of ZFS might detect 512 bytes because of backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the {{ic|<nowiki>ashift=12</nowiki>}} option should be used (see the [http://zfsonlinux.org/faq.html#HowDoesZFSonLinuxHandlesAdvacedFormatDrives ZFS on Linux FAQ]). The full command would in this case be:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
        NAME                                       STATE     READ WRITE CKSUM<br />
        bigdata                                    ONLINE       0     0     0<br />
          raidz1-0                                 ONLINE       0     0     0<br />
            ata-ST3000DM001-9YN166_S1F0KDGY-part1  ONLINE       0     0     0<br />
            ata-ST3000DM001-9YN166_S1F0JKRR-part1  ONLINE       0     0     0<br />
            ata-ST3000DM001-9YN166_S1F0KBP8-part1  ONLINE       0     0     0<br />
            ata-ST3000DM001-9YN166_S1F0JTM1-part1  ONLINE       0     0     0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
== Tuning ==<br />
Although many knobs are available on a zfs pool, there are two major ones users can consider:<br />
*atime<br />
*compression<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool; it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
Compression is just that, transparent compression of data. Consult the man page for the various options. A recent advancement is the lz4 algorithm, which offers excellent compression and performance. Enable it (or any other) using the zfs command:<br />
# zfs set compression=lz4 <pool><br />
<br />
Other options for zfs can be displayed, again using the zfs command:<br />
 # zfs get all <pool><br />
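<br />
To verify that the settings above took effect, the relevant properties can be queried directly; {{ic|compressratio}} reports the compression achieved on data already written (the pool name is a placeholder):<br />
 # zfs get atime,compression,compressratio <pool><br />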
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not already exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS-specific attributes to the dataset. For example, one could assign a quota limit to a dataset (note that the target of {{ic|zfs set}} must be a dataset, not a plain directory):<br />
<br />
 # zfs set quota=20G <nameofzpool>/<nameofdataset><br />
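<br />
As a quick sanity check, the new limit can be read back with {{ic|zfs get}}, using the same placeholder names as above:<br />
 # zfs get quota <nameofzpool>/<nameofdataset><br />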
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Scrub ===<br />
<br />
ZFS pools should be scrubbed at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including any read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Export a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid is different in the archiso than it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can be produced at boot time, abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair, by either exporting the pool, or adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool,<br />
<br />
# zpool export bigdata<br />
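<br />
On the target system, the pool can then be imported again by name, e.g. for the ''bigdata'' pool from the example above:<br />
 # zpool import bigdata<br />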
<br />
=== Rename a Zpool ===<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a Different Mount Point ===<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow swap files to be used, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the <tt>getconf PAGESIZE</tt> command (the default on x86_64 is 4 KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.<br />
<br />
Create an 8 GiB zfs volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o primarycache=metadata \<br />
-o sync=always \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
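<br />
The new swap space can also be activated immediately, without rebooting, using the standard util-linux command (the pool name is a placeholder):<br />
 # swapon /dev/zvol/<pool>/swap<br />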
<br />
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If hibernation is needed, keep a separate partition for it.<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ==== <br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from the [[Arch User Repository|AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc.), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the {{ic|--keep}} parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set <tt>com.sun:auto-snapshot=false</tt> on it. Likewise, more fine-grained control is possible per label: for example, if no monthly snapshots are to be kept on a dataset, set <tt>com.sun:auto-snapshot:monthly=false</tt>.<br />
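<br />
Both properties are set with the usual {{ic|zfs set}} syntax; the dataset name below is a placeholder:<br />
 # zfs set com.sun:auto-snapshot=false <pool>/<dataset><br />
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/<dataset><br />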
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from the [[Arch User Repository|AUR]] provides a Python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [http://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son "Grandfather-father-son"] scheme. It can be configured to, for example, keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots.<br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
==Troubleshooting==<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise-class storage systems! To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
 zfs.zfs_arc_max=536870912 # (512 MiB = 512*1024*1024 bytes)<br />
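<br />
The limit can also be changed on a running system through the module parameter exposed in sysfs; note that, unlike the kernel parameter, this does not persist across reboots:<br />
 # echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max<br />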
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem,<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use <code>-f</code> with the zpool create command, as shown in [[#Create a storage pool]].<br />
<br />
=== No hostid found ===<br />
<br />
This error occurs at boot, with the following lines appearing before the initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the SPL hostid. There are two solutions for this. The first is to place the SPL hostid in the [[kernel parameters]] in the boot loader, for example by adding <code>spl.spl_hostid=0x00bab10c</code>.<br />
<br />
The other solution is to make sure that there is a hostid in <code>/etc/hostid</code>, and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[ZFS#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
{{bc|# zpool import -a -f}}<br />
<br />
now export the pool:<br />
<br />
{{bc|# zpool export <pool>}}<br />
<br />
To see the available pools, use,<br />
<br />
{{bc|# zpool status}}<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso, the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:<br />
<br />
{{bc|# mkinitcpio -p linux}}<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. The following value is just an example:<br />
% hostid<br />
0a0af0f8<br />
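<br />
That value can then be passed to SPL on every boot via the [[kernel parameters]], as in [[#No hostid found]]; using the example value above:<br />
 spl.spl_hostid=0x0a0af0f8<br />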
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required. <br />
<br />
To embed {{ic|zfs}} in the archiso, from an existing install, download the {{ic|archiso}} package.<br />
<br />
# pacman -S archiso<br />
<br />
Start the process: <br />
# cp -r /usr/share/archiso/configs/releng /root/media<br />
<br />
Edit the {{ic|packages.x86_64}} file, adding these lines:<br />
spl-utils-git<br />
spl-git<br />
zfs-utils-git<br />
zfs-git<br />
<br />
Edit the {{ic|pacman.conf}} file, adding these lines (TODO, correctly embed keys in the installation media?):<br />
[demz-repo-archiso]<br />
SigLevel = Never<br />
Server = <nowiki>http://demizerone.com/$repo/$arch</nowiki><br />
<br />
Add other packages in {{ic|packages.both}}, {{ic|packages.i686}}, or {{ic|packages.x86_64}} if needed and create the image.<br />
# ./build.sh -v<br />
<br />
The image will be in the {{ic|/root/media/out}} directory.<br />
<br />
More information about the process can be found in [http://kroweer.wordpress.com/2011/09/07/creating-a-custom-arch-linux-live-usb/ this guide] or in the [[Archiso]] article.<br />
<br />
If installing onto a UEFI system, see [[Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO]] for creating UEFI compatible installation media.<br />
<br />
=== Encryption in ZFS on linux ===<br />
<br />
ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while still having all the advantages of ZFS, like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so the {{ic|zpool create}} commands only need to be changed to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed no import errors will occur.<br />
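<br />
To have {{ic|/dev/mapper/enc}} created automatically at boot, an entry in {{ic|/etc/crypttab}} can be used. The following is a minimal sketch matching the plain dm-crypt example above, assuming systemd's crypttab option names; the device names are placeholders:<br />
 enc  /dev/sdX  /dev/sdZ  plain,hash=sha512,cipher=twofish-xts-plain64,size=512<br />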
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
Here is how to use the archiso to get into the ZFS filesystem for maintenance.<br />
<br />
Boot the latest archiso and bring up the network:<br />
<br />
# wifi-menu<br />
# ip link set eth0 up<br />
<br />
Test the network connection:<br />
<br />
# ping google.com<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
(optional) Install a text editor:<br />
<br />
# pacman -S vim<br />
<br />
Add archzfs archiso repository to {{ic|pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|<nowiki><br />
[demz-repo-archiso]<br />
SigLevel = Required<br />
Server = http://demizerone.com/$repo/$arch</nowiki>}}<br />
<br />
Sync the pacman package database:<br />
<br />
# pacman -Syy<br />
<br />
Add the archzfs maintainer's PGP key to the local (installer image) trust:<br />
# pacman-key --lsign-key 0EE7A126<br />
<br />
Install the ZFS package group:<br />
<br />
# pacman -S archzfs<br />
<br />
Load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)<br />
<br />
This will generate the module dependency information for the kernel version installed in the chroot installation, so that the correct kernel modules can be loaded.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
== See also ==<br />
<br />
* [[Installing Arch Linux on ZFS]]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [http://zfsonlinux.org/faq.html ZFS on Linux FAQ]<br />
* [http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html FreeBSD Handbook -- The Z File System]<br />
* [http://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ Pingdom details how it backs up 5TB of MySQL data every day with ZFS]<br />
<br />
; Aaron Toponce has authored a 17-part blog series on ZFS which is an excellent read.<br />
# [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ VDEVs]<br />
# [https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ RAIDZ Levels]<br />
# [https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/ The ZFS Intent Log]<br />
# [https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/ The ARC]<br />
# [https://pthree.org/2012/12/10/zfs-administration-part-v-exporting-and-importing-zpools/ Import/export zpools]<br />
# [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ Scrub and Resilver]<br />
# [https://pthree.org/2012/12/12/zfs-administration-part-vii-zpool-properties/ Zpool Properties]<br />
# [https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Zpool Best Practices]<br />
# [https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/ Copy on Write]<br />
# [https://pthree.org/2012/12/17/zfs-administration-part-x-creating-filesystems/ Creating Filesystems]<br />
# [https://pthree.org/2012/12/18/zfs-administration-part-xi-compression-and-deduplication/ Compression and Deduplication]<br />
# [https://pthree.org/2012/12/19/zfs-administration-part-xii-snapshots-and-clones/ Snapshots and Clones]<br />
# [https://pthree.org/2012/12/20/zfs-administration-part-xiii-sending-and-receiving-filesystems/ Send/receive Filesystems]<br />
# [https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/ ZVOLs]<br />
# [https://pthree.org/2012/12/31/zfs-administration-part-xv-iscsi-nfs-and-samba/ iSCSI, NFS, and Samba]<br />
# [https://pthree.org/2013/01/02/zfs-administration-part-xvi-getting-and-setting-properties/ Get/Set Properties]<br />
# [https://pthree.org/2013/01/03/zfs-administration-part-xvii-best-practices-and-caveats/ ZFS Best Practices]</div>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article provides several options for the installation of the requisite ZFS software packages.<br />
<br />
==Building from AUR==<br />
[[AUR]] contains {{AUR|zfs-git}} and {{AUR|zfs-lts}} packages for building ZFSonLinux. The git packages are required if zfs is to be used with the default kernel. ZFSonLinux.org is slow to make official releases, but kernel API changes are made often, requiring up-to-date ZFS sources from GitHub to build successfully.<br />
<br />
The ZFS kernel module and related utils are available in the [[AUR]]; all are required:<br />
<br />
*{{AUR|spl-utils-git}}<br />
*{{AUR|spl-git}}<br />
*{{AUR|zfs-utils-git}}<br />
*{{AUR|zfs-git}}<br />
<br />
For users that are concerned with using git-based packages, the {{AUR|zfs-lts}} packages, which use the sources officially released by ZFSonLinux.org, are available in the [[AUR]]:<br />
<br />
*{{AUR|spl-utils-lts}}<br />
*{{AUR|spl-lts}}<br />
*{{AUR|zfs-utils-lts}}<br />
*{{AUR|zfs-lts}}<br />
<br />
{{note|The ZFS and SPL (Solaris Porting Layer, a Linux kernel module which provides many of the Solaris kernel APIs) kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to the AUR or the archzfs repository.}}<br />
<br />
=== Automated build script ===<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages, with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
  command -v "$i" >/dev/null 2>&1 || {<br />
    echo "I require $i but it's not installed. Aborting." >&2<br />
    exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
  . ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
  echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
  echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
  [[ -d "$i" ]] && rm -rf "$i"<br />
  cower -d "$i"<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
  cd "$WORK/$i"<br />
  sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
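<br />
Once saved, the script only needs to be made executable and run; the finished packages end up in the local repo defined by {{ic|REPO}}:<br />
 $ chmod +x ~/bin/build_zfs<br />
 $ ~/bin/build_zfs<br />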
<br />
==Unofficial repository==<br />
<br />
Pre-built packages from the {{AUR|zfs-git}} and {{AUR|zfs-lts}} package maintainer are available in the signed [[Unofficial user repositories#demz-repo-core|demz-repo-core]] repository.<br />
<br />
To use this repository, the signing key must be imported into the host and locally signed. See [[Pacman-key#Adding_unofficial_keys]]. Once this is done, it is possible to update the package database and install ZFS packages:<br />
<br />
# pacman -Sy archzfs-git<br />
<br />
or<br />
<br />
# pacman -Sy archzfs-lts<br />
<br />
===Archiso Tracking Repository===<br />
<br />
ZFS can easily be used from within the archiso live environment by using the special archiso tracking repository for ZFS. This repository makes it easy to install Arch Linux on a root ZFS filesystem, or to mount ZFS pools from within an archiso live environment using an up-to-date live medium. The details for using this repository from a live environment are given [[Unofficial user repositories#demz-repo-archiso|here]].<br />
<br />
This repository and packages are also signed, so the key must be locally signed following the steps listed in the previous section before use. For a guide on how to install Arch Linux on to a root ZFS filesystem, see [[Installing Arch Linux on ZFS]].<br />
<br />
Once the above steps are complete, the process is as follows. Do not use {{ic|pacman -Syyu}}, as this will require a number of signatures and build against a kernel that is not the one running from the ISO. Instead do:<br />
<br />
# pacman -Syy<br />
<br />
# pacman -S archzfs-git<br />
<br />
If this succeeded, running {{ic|zpool status}} should give some output other than a kernel insmod error. You may then want to partition the drives, for example using gdisk. If the drives were previously partitioned with zfs, a zfs upgrade will be needed. Before doing this, make sure a snapshot or backup image of any valuable data exists.</div>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article provides several options for the installation of the requisite ZFS software packages.<br />
<br />
==Building from AUR==<br />
[[AUR]] contains {{AUR|zfs-git}} and {{AUR|zfs-lts}} packages for building ZFSonLinux. The git packages are required if zfs is to be used with the default kernel. ZFSonLinux.org is slow to make official releases, but kernel API changes are made often that require up-to-date ZFS sources from github to build successfully.<br />
<br />
The ZFS kernel module and related utils are available in the [[AUR]]; all are required:<br />
<br />
*{{AUR|spl-utils-git}}<br />
*{{AUR|spl-git}}<br />
*{{AUR|zfs-utils-git}}<br />
*{{AUR|zfs-git}}<br />
<br />
For users that are concerned with using git based packages, the {{AUR|zfs-lts}} packages are available in [[AUR]] that use the sources officially released by ZFSonLinux.org:<br />
<br />
*{{AUR|spl-utils-lts}}<br />
*{{AUR|spl-lts}}<br />
*{{AUR|zfs-utils-lts}}<br />
*{{AUR|zfs-lts}}<br />
<br />
{{note|The ZFS and SPL (Solaris Porting Layer is a Linux kernel module which provides many of the Solaris kernel APIs) kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the archzfs repository.}}<br />
<br />
=== Automated build script ===<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needed sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Unofficial repository==<br />
<br />
Pre-built packages from the maintainer of {{AUR|zfs-git}} and {{AUR|zfs-lts}} are available in the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] signed repository.<br />
<br />
To use this repository, the signing key must be imported into the host and locally signed. See [[Pacman-key#Adding_unofficial_keys]].<br />
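<br />
A minimal sketch of the key step, using the key-ID 0EE7A126 listed for [[Unofficial user repositories#demz-repo-core|demz-repo-core]]:<br />
<br />
 # pacman-key -r 0EE7A126<br />
 # pacman-key --lsign-key 0EE7A126<br />
<br />
Once this is done, update the package database and install ZFS packages:<br />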
<br />
# pacman -Sy archzfs-git<br />
<br />
or<br />
<br />
# pacman -Sy archzfs-lts<br />
<br />
===Archiso Tracking Repository===<br />
<br />
ZFS can easily be used from within the archiso live environment by using the special archiso tracking repository for ZFS. This repository makes it easy to install Arch Linux on a root ZFS filesystem, or to mount ZFS pools from within an archiso live environment using an up-to-date live medium. The details for using this repository from a live environment are given in [[Unofficial user repositories#demz-repo-archiso]].<br />
<br />
This repository and packages are also signed, so the key must be locally signed following the steps listed in the previous section before use. For a guide on how to install Arch Linux on to a root ZFS filesystem, see [[Installing Arch Linux on ZFS]].<br />
<br />
Once the above steps are complete, the process is as follows. Do not use {{ic|pacman -Syyu}}: it would pull in a number of additional package signatures and upgrade to packages built against a kernel that is not the one running from the ISO. Instead do:<br />
<br />
# pacman -Syy<br />
<br />
# pacman -S archzfs<br />
<br />
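As a quick sanity check, assuming the packages installed cleanly against the running kernel, load the module and query the pool state:<br />
<br />
 # modprobe zfs<br />
 # zpool status<br />
<br />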
If this succeeded, running {{ic|zpool status}} should produce output other than a kernel insmod error. The drives can then be partitioned with ''gdisk'' or a similar tool. If the drives were previously partitioned with an older version of ZFS, the pool will need to be upgraded with {{ic|zpool upgrade}}. Make sure a backed-up snapshot or a backup image of any valuable data exists before doing so.</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS_Installation&diff=311351ZFS Installation2014-04-21T15:04:35Z<p>Demizer: Add -git to packages names.</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article provides several options for the installation of the requisite ZFS software packages.<br />
<br />
==Building from AUR==<br />
[[AUR]] contains {{AUR|zfs-git}} and {{AUR|zfs-lts}} packages for building ZFSonLinux. The git packages are required if ZFS is to be used with the default kernel: ZFSonLinux.org is slow to make official releases, while kernel API changes happen often enough that up-to-date ZFS sources from GitHub are needed to build successfully.<br />
<br />
The ZFS kernel module and related utils are available in the [[AUR]]; all are required:<br />
<br />
*{{AUR|spl-utils-git}}<br />
*{{AUR|spl-git}}<br />
*{{AUR|zfs-utils-git}}<br />
*{{AUR|zfs-git}}<br />
<br />
For users who would rather avoid git-based packages, the {{AUR|zfs-lts}} packages are available in the [[AUR]] and use the sources officially released by ZFSonLinux.org:<br />
<br />
*{{AUR|spl-utils-lts}}<br />
*{{AUR|spl-lts}}<br />
*{{AUR|zfs-utils-lts}}<br />
*{{AUR|zfs-lts}}<br />
<br />
{{note|The ZFS and SPL (Solaris Porting Layer, a Linux kernel module that provides many of the Solaris kernel APIs) kernel modules are tied to a specific kernel version. It will not be possible to apply kernel updates until updated packages are uploaded to the AUR or the archzfs repository.}}<br />
<br />
=== Automated build script ===<br />
The build order of the above is important due to nested dependencies. The entire process, including downloading the packages, can be automated with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Unofficial repository==<br />
<br />
Pre-built packages from the maintainer of {{AUR|zfs-git}} and {{AUR|zfs-lts}} are available in the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] signed repository.<br />
<br />
To use this repository, the signing key must be imported into the host and locally signed. See [[Pacman-key#Adding_unofficial_keys]]. Once this is done, the package database can be updated and ZFS packages installed:<br />
<br />
# pacman -S archzfs-git<br />
<br />
or<br />
<br />
# pacman -S archzfs-lts<br />
<br />
===Archiso Tracking Repository===<br />
<br />
ZFS can easily be used from within the archiso live environment by using the special archiso tracking repository for ZFS. This repository makes it easy to install Arch Linux on a root ZFS filesystem, or to mount ZFS pools from within an archiso live environment using an up-to-date live medium. The details for using this repository from a live environment are given in [[Unofficial user repositories#demz-repo-archiso]].<br />
<br />
This repository and packages are also signed, so the key must be locally signed following the steps listed in the previous section before use. For a guide on how to install Arch Linux on to a root ZFS filesystem, see [[Installing Arch Linux on ZFS]].<br />
<br />
Once the above steps are complete, the process is as follows. Do not use {{ic|pacman -Syyu}}: it would pull in a number of additional package signatures and upgrade to packages built against a kernel that is not the one running from the ISO. Instead do:<br />
<br />
# pacman -Syy<br />
<br />
# pacman -S archzfs<br />
<br />
If this succeeded, running {{ic|zpool status}} should produce output other than a kernel insmod error. The drives can then be partitioned with ''gdisk'' or a similar tool. If the drives were previously partitioned with an older version of ZFS, the pool will need to be upgraded with {{ic|zpool upgrade}}. Make sure a backed-up snapshot or a backup image of any valuable data exists before doing so.</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS_Installation&diff=311350ZFS Installation2014-04-21T15:03:27Z<p>Demizer: Update unofficial repository section.</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article provides several options for the installation of the requisite ZFS software packages.<br />
<br />
==Building from AUR==<br />
[[AUR]] contains {{AUR|zfs-git}} and {{AUR|zfs-lts}} packages for building ZFSonLinux. The git packages are required if ZFS is to be used with the default kernel: ZFSonLinux.org is slow to make official releases, while kernel API changes happen often enough that up-to-date ZFS sources from GitHub are needed to build successfully.<br />
<br />
The ZFS kernel module and related utils are available in the [[AUR]]; all are required:<br />
<br />
*{{AUR|spl-utils-git}}<br />
*{{AUR|spl-git}}<br />
*{{AUR|zfs-utils-git}}<br />
*{{AUR|zfs-git}}<br />
<br />
For users who would rather avoid git-based packages, the {{AUR|zfs-lts}} packages are available in the [[AUR]] and use the sources officially released by ZFSonLinux.org:<br />
<br />
*{{AUR|spl-utils-lts}}<br />
*{{AUR|spl-lts}}<br />
*{{AUR|zfs-utils-lts}}<br />
*{{AUR|zfs-lts}}<br />
<br />
{{note|The ZFS and SPL (Solaris Porting Layer, a Linux kernel module that provides many of the Solaris kernel APIs) kernel modules are tied to a specific kernel version. It will not be possible to apply kernel updates until updated packages are uploaded to the AUR or the archzfs repository.}}<br />
<br />
=== Automated build script ===<br />
The build order of the above is important due to nested dependencies. The entire process, including downloading the packages, can be automated with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils spl zfs-utils zfs; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
for i in spl-utils spl zfs-utils zfs; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
<br />
==Unofficial repository==<br />
<br />
Pre-built packages from the maintainer of {{AUR|zfs-git}} and {{AUR|zfs-lts}} are available in the [[Unofficial user repositories#demz-repo-core|demz-repo-core]] signed repository.<br />
<br />
To use this repository, the signing key must be imported into the host and locally signed. See [[Pacman-key#Adding_unofficial_keys]]. Once this is done, the package database can be updated and ZFS packages installed:<br />
<br />
# pacman -S archzfs-git<br />
<br />
or<br />
<br />
# pacman -S archzfs-lts<br />
<br />
===Archiso Tracking Repository===<br />
<br />
ZFS can easily be used from within the archiso live environment by using the special archiso tracking repository for ZFS. This repository makes it easy to install Arch Linux on a root ZFS filesystem, or to mount ZFS pools from within an archiso live environment using an up-to-date live medium. The details for using this repository from a live environment are given in [[Unofficial user repositories#demz-repo-archiso]].<br />
<br />
This repository and packages are also signed, so the key must be locally signed following the steps listed in the previous section before use. For a guide on how to install Arch Linux on to a root ZFS filesystem, see [[Installing Arch Linux on ZFS]].<br />
<br />
Once the above steps are complete, the process is as follows. Do not use {{ic|pacman -Syyu}}: it would pull in a number of additional package signatures and upgrade to packages built against a kernel that is not the one running from the ISO. Instead do:<br />
<br />
# pacman -Syy<br />
<br />
# pacman -S archzfs<br />
<br />
If this succeeded, running {{ic|zpool status}} should produce output other than a kernel insmod error. The drives can then be partitioned with ''gdisk'' or a similar tool. If the drives were previously partitioned with an older version of ZFS, the pool will need to be upgraded with {{ic|zpool upgrade}}. Make sure a backed-up snapshot or a backup image of any valuable data exists before doing so.</div>Demizerhttps://wiki.archlinux.org/index.php?title=Unofficial_user_repositories&diff=311336Unofficial user repositories2014-04-21T14:55:07Z<p>Demizer: /* demz-repo-core */</p>
<hr />
<div>[[Category:Package management]]<br />
{{Related articles start}}<br />
{{Related|pacman-key}}<br />
{{Related|Official repositories}}<br />
{{Related articles end}} <br />
The AUR only allows users to upload PKGBUILDs and other package-build-related files; it does not provide a means of distributing binary packages. A user may therefore want to create a binary repository of their packages elsewhere. See [[Pacman Tips#Custom local repository]] for more information.<br />
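<br />
A minimal sketch of creating such a repository with {{ic|repo-add}} (the paths and package file name are placeholders), assuming the packages are already built:<br />
{{bc|<nowiki><br />
$ repo-add /srv/repo/myrepo.db.tar.gz /srv/repo/mypackage-1.0-1-x86_64.pkg.tar.xz<br />
</nowiki>}}<br />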
<br />
If you have your own repository, please add it to this page so that other users will know where to find your packages. Please observe the following rules when adding new repositories:<br />
<br />
* Keep the lists in alphabetical order.<br />
* Include some information about the maintainer: include at least a (nick)name and some form of contact information (web site, email address, user page on ArchWiki or the forums, etc.).<br />
* If the repository is of the ''signed'' variety, please include a key-id, possibly using it as the anchor for a link to its keyserver; if the key is not on a keyserver, include a link to the key file.<br />
* Include some short description (e.g. the category of packages provided in the repository).<br />
* If there is a page (either on ArchWiki or external) containing more information about the repository, include a link to it.<br />
* If possible, avoid using comments in code blocks. The formatted description is much more readable. Users who want some comments in their {{ic|pacman.conf}} can easily create it on their own.<br />
<br />
{{Note|If you are looking to add a signed repository to your {{ic|pacman.conf}}, you must be familiar with [[Pacman-key#Adding unofficial keys]].}}<br />
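<br />
A minimal sketch of that procedure, using the key-ID of the [[#infinality-bundle-fonts|infinality-bundle-fonts]] repository below as an example:<br />
{{bc|<nowiki><br />
# pacman-key -r 962DDE58<br />
# pacman-key --lsign-key 962DDE58<br />
</nowiki>}}<br />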
<br />
{{Expansion|Please fill in the missing information about maintainers.}}<br />
<br />
== Any ==<br />
<br />
"Any" repositories are architecture-independent. In other words, they can be used on both i686 and x86_64 systems.<br />
<br />
=== Signed ===<br />
<br />
==== bioinformatics-any ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/decryptedepsilon/ decryptedepsilon]<br />
* '''Description:''' A repository containing some python packages and genome browser for Bioinformatics<br />
* '''Key-ID:''' 60442BA4<br />
<br />
{{bc|<nowiki><br />
[bioinformatics-any]<br />
Server = http://decryptedepsilon.bl.ee/repo/any<br />
</nowiki>}}<br />
<br />
==== infinality-bundle-fonts ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle-fonts repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle-fonts]<br />
Server = http://bohoomil.com/repo/fonts<br />
</nowiki>}}<br />
<br />
==== xyne-any ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for "any" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{Note|Use this repository only if there is no matching {{ic|[xyne-*]}} repository for your architecture.}}<br />
<br />
{{bc|<nowiki><br />
[xyne-any]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
==== archlinuxgr-any ====<br />
* '''Maintainer:'''<br />
* '''Description:''' The Hellenic (Greek) unofficial Arch Linux repository with many interesting packages.<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr-any]<br />
Server = http://archlinuxgr.tiven.org/archlinux/any<br />
</nowiki>}}<br />
<br />
== Both i686 and x86_64 ==<br />
<br />
Repositories with both i686 and x86_64 versions. The {{ic|$arch}} variable will be set automatically by pacman.<br />
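<br />
A minimal illustration: {{ic|$arch}} expands according to the {{ic|Architecture}} option in {{ic|pacman.conf}}.<br />
{{bc|<nowiki><br />
[options]<br />
# With "auto", $arch in Server URLs expands to the machine architecture<br />
Architecture = auto<br />
</nowiki>}}<br />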
<br />
=== Signed ===<br />
<br />
==== arcanisrepo ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#arcanis arcanis]<br />
* '''Description:''' A repository with some AUR packages including packages from VCS<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[arcanisrepo]<br />
Server = ftp://repo.arcanis.name/repo/$arch<br />
</nowiki>}}<br />
<br />
==== bbqlinux ====<br />
<br />
* '''Maintainer:''' [https://plus.google.com/u/0/+DanielHillenbrand/about Daniel Hillenbrand]<br />
* '''Description:''' Packages for Android Development<br />
* '''Upstream Page:''' http://bbqlinux.org/<br />
* '''Key-ID:''' Get the bbqlinux-keyring package, as it contains the needed keys.<br />
<br />
{{bc|<nowiki><br />
[bbqlinux]<br />
Server = http://packages.bbqlinux.org/$arch<br />
</nowiki>}}<br />
==== carstene1ns ====<br />
<br />
* '''Maintainer:''' [[User:Carstene1ns|Carsten Teibes]]<br />
* '''Description:''' AUR packages maintained and/or used by Carsten Teibes (games/Wii/lib32/Python)<br />
* '''Upstream page:''' http://arch.carsten-teibes.de (still under construction)<br />
* '''Key-ID:''' 2476B20B<br />
<br />
{{bc|<nowiki><br />
[carstene1ns]<br />
Server = http://repo.carsten-teibes.de/$arch<br />
</nowiki>}}<br />
<br />
==== catalyst ====<br />
<br />
* '''Maintainer:''' [[User:Vi0L0 | Vi0l0]]<br />
* '''Description:''' ATI Catalyst proprietary drivers.<br />
* '''Upstream Page:''' http://catalyst.wirephire.com<br />
* '''Key-ID:''' 653C3094<br />
<br />
{{bc|<nowiki><br />
[catalyst]<br />
Server = http://catalyst.wirephire.com/repo/catalyst/$arch<br />
## Mirrors, if the primary server does not work or is too slow:<br />
#Server = http://70.239.162.206/catalyst-mirror/repo/catalyst/$arch<br />
#Server = http://mirror.rts-informatique.fr/archlinux-catalyst/repo/catalyst/$arch<br />
#Server = http://mirror.hactar.bz/Vi0L0/catalyst/$arch<br />
</nowiki>}}<br />
<br />
==== catalyst-hd234k ====<br />
<br />
* '''Maintainer:''' [[User:Vi0L0 | Vi0l0]]<br />
* '''Description:''' ATI Catalyst proprietary drivers.<br />
* '''Upstream Page:''' http://catalyst.wirephire.com<br />
* '''Key-ID:''' 653C3094<br />
<br />
{{bc|<nowiki><br />
[catalyst-hd234k]<br />
Server = http://catalyst.wirephire.com/repo/catalyst-hd234k/$arch<br />
## Mirrors, if the primary server does not work or is too slow:<br />
#Server = http://70.239.162.206/catalyst-mirror/repo/catalyst-hd234k/$arch<br />
#Server = http://mirror.rts-informatique.fr/archlinux-catalyst/repo/catalyst-hd234k/$arch<br />
#Server = http://mirror.hactar.bz/Vi0L0/catalyst-hd234k/$arch<br />
</nowiki>}}<br />
<br />
==== city ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#bgyorgy Balló György]<br />
* '''Description:''' Experimental/unpopular packages.<br />
* '''Upstream page:''' http://pkgbuild.com/~bgyorgy/city.html<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[city]<br />
Server = http://pkgbuild.com/~bgyorgy/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== crypto ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Includes tomb, tomb-git, and other related software.<br />
<br />
{{bc|<nowiki><br />
[crypto]<br />
Server = http://tomb.dyne.org/arch_repo/$arch<br />
</nowiki>}}<br />
<br />
==== demz-repo-core ====<br />
<br />
* '''Maintainer:''' [http://demizerone.com Jesus Alvarez (demizer)]<br />
* '''Description:''' Packages for ZFS on Arch Linux.<br />
* '''Upstream page:''' http://demizerone.com/archzfs<br />
* '''Key-ID:''' 0EE7A126<br />
<br />
{{bc|<nowiki><br />
[demz-repo-core]<br />
Server = http://demizerone.com/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== demz-repo-archiso ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Packages for installing ZFS from an Arch ISO live disk<br />
* '''Upstream page:''' http://demizerone.com/archzfs<br />
* '''Key-ID:''' 0EE7A126 (temporarily using key 5EE46C4C [https://aur.archlinux.org/packages/zfs/?comments=all])<br />
<br />
{{bc|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== infinality-bundle ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle main repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle]<br />
Server = http://bohoomil.com/repo/$arch<br />
</nowiki>}}<br />
<br />
==== metalgamer ====<br />
<br />
* '''Maintainer:''' [http://metalgamer.eu/ metalgamer]<br />
* '''Description:''' Packages I use and/or maintain on the AUR.<br />
* '''Key ID:''' F55313FB<br />
<br />
{{bc|<nowiki><br />
[metalgamer]<br />
Server = http://repo.metalgamer.eu/$arch<br />
</nowiki>}}<br />
<br />
==== pipelight ====<br />
<br />
* '''Maintainer:''' <br />
* '''Description:''' Pipelight and wine-compholio<br />
* '''Upstream page:''' [http://fds-team.de/ fds-team.de]<br />
* '''Key-ID:''' E49CC0415DC2D5CA<br />
* '''Keyfile:''' http://repos.fds-team.de/Release.key<br />
{{bc|<nowiki>[pipelight]<br />
Server = http://repos.fds-team.de/stable/arch/$arch</nowiki>}}<br />
<br />
==== repo-ck ====<br />
<br />
* '''Maintainer:''' [[User:Graysky|graysky]]<br />
* '''Description:''' Kernel and modules with Brain Fuck Scheduler and all the goodies in the ck1 patch set.<br />
* '''Upstream page:''' [http://repo-ck.com repo-ck.com]<br />
* '''Wiki:''' [[repo-ck]]<br />
* '''Key-ID:''' 5EE46C4C<br />
<br />
{{bc|<nowiki><br />
[repo-ck]<br />
Server = http://repo-ck.com/$arch<br />
</nowiki>}}<br />
<br />
==== sergej-repo ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#spupykin Sergej Pupykin]<br />
* '''Description:''' psi-plus, owncloud-git, ziproxy, android, MySQL, and other stuff. Some packages also available for armv7h.<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[sergej-repo]<br />
Server = http://repo.p5n.pp.ru/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
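<br />
For example, combined with the [[#archlinuxfr|archlinuxfr]] entry below:<br />
{{bc|<nowiki><br />
[archlinuxfr]<br />
SigLevel = PackageOptional<br />
Server = http://repo.archlinux.fr/$arch<br />
</nowiki>}}<br />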
<br />
==== alucryd ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Repository containing various packages Maxime Gauduin maintains (or not) in the AUR.<br />
<br />
{{bc|<nowiki><br />
[alucryd]<br />
Server = http://pkgbuild.com/~alucryd/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== archaudio ====<br />
<br />
* '''Maintainer:''' [[User:Schivmeister|Ray Rashif]], [https://aur.archlinux.org/account/jhernberg Joakim Hernberg]<br />
* '''Description:''' Pro-audio packages<br />
<br />
{{bc|<nowiki><br />
[archaudio-production]<br />
Server = http://repos.archaudio.org/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== archie-repo ====<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/Kalinda/ Kalinda]<br />
* '''Description:''' Repo for wine-silverlight, pipelight, and some misc packages.<br />
<br />
{{bc|<nowiki><br />
[archie-repo]<br />
Server = http://andontie.net/archie-repo/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxcn ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' The Chinese Arch Linux communities packages.<br />
<br />
{{bc|<nowiki><br />
[archlinuxcn]<br />
Server = http://repo.archlinuxcn.org/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxfr ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:'''<br />
* '''Upstream page:''' http://afur.archlinux.fr<br />
<br />
{{bc|<nowiki><br />
[archlinuxfr]<br />
Server = http://repo.archlinux.fr/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgis ====<br />
{{Note|Off-line since 2014-03-29.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' Maintainers needed - low bandwidth<br />
<br />
{{bc|<nowiki><br />
[archlinuxgis]<br />
Server = http://archlinuxgis.no-ip.org/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgr ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:'''<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr]<br />
Server = http://archlinuxgr.tiven.org/archlinux/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgr-kde4 ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' KDE4 packages (plasmoids, themes etc) provided by the Hellenic (Greek) Arch Linux community<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr-kde4]<br />
Server = http://archlinuxgr.tiven.org/archlinux-kde4/$arch<br />
</nowiki>}}<br />
<br />
==== archstuff ====<br />
{{Note|Off-line since 2014-01-06.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' AUR's most voted and many bin32-* and lib32-* packages.<br />
<br />
{{bc|<nowiki><br />
[archstuff]<br />
Server = http://archstuff.vs169092.vserver.de/$arch<br />
</nowiki>}}<br />
<br />
==== arsch ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' From users of orgizm.net<br />
<br />
{{bc|<nowiki><br />
[arsch]<br />
Server = http://arsch.orgizm.net/$arch<br />
</nowiki>}}<br />
<br />
==== aurbin ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Automated build of AUR packages<br />
<br />
{{bc|<nowiki><br />
[aurbin]<br />
Server = http://aurbin.net/$arch<br />
</nowiki>}}<br />
<br />
==== cinnamon ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Stable and actively developed Cinnamon packages (Applets, Themes, Extensions), plus others (Hotot, qBitTorrent, GTK themes, Perl modules, and more).<br />
<br />
{{bc|<nowiki><br />
[cinnamon]<br />
Server = http://archlinux.zoelife4u.org/cinnamon/$arch<br />
</nowiki>}}<br />
<br />
==== ede ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Equinox Desktop Environment repository<br />
<br />
{{bc|<nowiki><br />
[ede]<br />
Server = http://www.equinox-project.org/repos/arch/$arch<br />
</nowiki>}}<br />
<br />
==== haskell-core ====<br />
<br />
* '''Maintainer:''' Magnus Therning<br />
* '''Description:''' Arch-Haskell repository<br />
* '''Upstream page:''' https://github.com/archhaskell/habs<br />
<br />
{{bc|<nowiki><br />
[haskell-core]<br />
Server = http://xsounds.org/~haskell/core/$arch<br />
</nowiki>}}<br />
<br />
==== heftig ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#heftig Jan Steffens]<br />
* '''Description:''' Includes linux-zen and aurora (Firefox development build - works alongside {{Pkg|firefox}} in the ''extra'' repository).<br />
* '''Upstream page:''' https://bbs.archlinux.org/viewtopic.php?id=117157<br />
<br />
{{bc|<nowiki><br />
[heftig]<br />
Server = http://pkgbuild.com/~heftig/repo/$arch<br />
</nowiki>}}<br />
<br />
==== herecura-stable ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' additional packages not found in the ''community'' repository<br />
<br />
{{bc|<nowiki><br />
[herecura-stable]<br />
Server = http://repo.herecura.be/herecura-stable/$arch<br />
</nowiki>}}<br />
<br />
==== herecura-testing ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' additional packages for testing build against stable arch<br />
<br />
{{bc|<nowiki><br />
[herecura-testing]<br />
Server = http://repo.herecura.be/herecura-testing/$arch<br />
</nowiki>}}<br />
<br />
==== mesa-git ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Mesa git builds for the ''testing'' and ''multilib-testing'' repositories<br />
<br />
{{bc|<nowiki><br />
[mesa-git]<br />
Server = http://pkgbuild.com/~lcarlier/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== oracle ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Oracle database client<br />
<br />
{{Warning|By adding this you are agreeing to the Oracle license at http://www.oracle.com/technetwork/licenses/instant-client-lic-152016.html}}<br />
<br />
{{bc|<nowiki><br />
[oracle]<br />
Server = http://linux.shikadi.net/arch/$repo/$arch/<br />
</nowiki>}}<br />
<br />
==== pantheon ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Repository containing Pantheon-related packages<br />
<br />
{{bc|<nowiki><br />
[pantheon]<br />
Server = http://pkgbuild.com/~alucryd/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== paulburton-fitbitd ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Contains fitbitd for synchronizing FitBit trackers<br />
<br />
{{bc|<nowiki><br />
[paulburton-fitbitd]<br />
Server = http://www.paulburton.eu/arch/fitbitd/$arch<br />
</nowiki>}}<br />
<br />
==== pfkernel ====<br />
<br />
* '''Maintainer:''' [[User:Nous|nous]]<br />
* '''Description:''' Generic and optimized binaries of the ARCH kernel patched with BFS, TuxOnIce, BFQ, Aufs3, linux-pf, kernel26-pf, gdm-old, nvidia-pf, nvidia-96xx, xchat-greek, arora-git<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/11734958/x86_64/index.html or start at http://tiny.cc/linux-pf<br />
<br />
{{bc|<nowiki><br />
[pfkernel]<br />
Server = http://dl.dropbox.com/u/11734958/$arch<br />
</nowiki>}}<br />
<br />
==== suckless ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' suckless.org packages<br />
<br />
{{bc|<nowiki><br />
[suckless]<br />
Server = http://dl.suckless.org/arch/$arch<br />
</nowiki>}}<br />
<br />
==== unity ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' unity packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity]<br />
Server = http://unity.xe-xe.org/$arch<br />
</nowiki>}}<br />
<br />
==== unity-extra ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' unity extra packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity-extra]<br />
Server = http://unity.xe-xe.org/extra/$arch<br />
</nowiki>}}<br />
<br />
==== unity ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' unity packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity]<br />
Server = http://unity.humbug.in/$arch<br />
</nowiki>}}<br />
<br />
==== unity-extra ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' unity extra packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity-extra]<br />
Server = http://unity.humbug.in/extra/$arch<br />
</nowiki>}}<br />
<br />
==== home_tarakbumba_archlinux_Arch_Extra_standard ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Contains a few pre-built AUR packages (zemberek, firefox-kde-opensuse, etc.)<br />
<br />
{{bc|<nowiki><br />
[home_tarakbumba_archlinux_Arch_Extra_standard]<br />
Server = http://download.opensuse.org/repositories/home:/tarakbumba:/archlinux/Arch_Extra_standard/$arch<br />
</nowiki>}}<br />
<br />
== i686 only ==<br />
<br />
=== Signed ===<br />
<br />
==== eee-ck ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Kernel and modules optimized for Asus Eee PC 701, with -ck patchset.<br />
<br />
{{bc|<nowiki><br />
[eee-ck]<br />
Server = http://zembla.shatteredsymmetry.com/repo<br />
</nowiki>}}<br />
<br />
==== xyne-i686 ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for the "i686" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{Note|This includes all packages in [[#xyne-any|<nowiki>[xyne-any]</nowiki>]].}}<br />
<br />
{{bc|<nowiki><br />
[xyne-i686]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
==== andrwe ====<br />
<br />
* '''Maintainer:''' Andrwe Lord Weber<br />
* '''Description:''' each program I'm using on x86_64 is compiled for i686 too<br />
* '''Upstream page:''' http://andrwe.org/linux/repository<br />
<br />
{{bc|<nowiki><br />
[andrwe]<br />
Server = http://repo.andrwe.org/i686<br />
</nowiki>}}<br />
<br />
==== batchbin ====<br />
{{Expansion|Who is the maintainer?}}<br />
{{Note|Offline since 2014-02-15.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' My personal projects and utilities which I feel can benefit others.<br />
<br />
{{bc|<nowiki><br />
[batchbin]<br />
Server = http://batchbin.ueuo.com/archlinux<br />
</nowiki>}}<br />
<br />
==== esclinux ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Mostly games, interactive fiction, and abc notation stuff already on the AUR.<br />
<br />
{{bc|<nowiki><br />
[esclinux]<br />
Server = http://download.tuxfamily.org/esclinuxcd/ressources/repo/i686/<br />
</nowiki>}}<br />
<br />
==== kpiche ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Stable OpenSync packages.<br />
<br />
{{bc|<nowiki><br />
[kpiche]<br />
Server = http://kpiche.archlinux.ca/repo<br />
</nowiki>}}<br />
<br />
==== kernel26-pae ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' PAE-enabled 32-bit kernel 2.6.39<br />
<br />
{{bc|<nowiki><br />
[kernel26-pae]<br />
Server = http://kernel26-pae.archlinux.ca/<br />
</nowiki>}}<br />
<br />
==== linux-pae ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' PAE-enabled 32-bit kernel 3.0<br />
<br />
{{bc|<nowiki><br />
[linux-pae]<br />
Server = http://pae.archlinux.ca/<br />
</nowiki>}}<br />
<br />
==== rfad ====<br />
<br />
* '''Maintainer:''' requiem [at] archlinux.us <br />
* '''Description:''' Repository made by haxit<br />
<br />
{{bc|<nowiki><br />
[rfad]<br />
Server = http://web.ncf.ca/ey723/archlinux/repo/<br />
</nowiki>}}<br />
<br />
==== studioidefix ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Precompiled boxee packages.<br />
<br />
{{bc|<nowiki><br />
[studioidefix]<br />
Server = http://studioidefix.googlecode.com/hg/repo/i686<br />
</nowiki>}}<br />
<br />
== x86_64 only ==<br />
<br />
=== Signed ===<br />
<br />
==== apathism ====<br />
<br />
* '''Maintainer:''' Koryabkin Ivan ([https://aur.archlinux.org/account/apathism/ apathism])<br />
* '''Upstream page:''' https://apathism.net/<br />
* '''Description:''' AUR packages that would take long to build, such as {{AUR|firefox-kde-opensuse}}.<br />
* '''Key-ID:''' 3E37398D<br />
* '''Keyfile:''' http://apathism.net/archlinux/apathism.key<br />
<br />
{{bc|<nowiki><br />
[apathism]<br />
Server = http://apathism.net/archlinux/<br />
</nowiki>}}<br />
<br />
==== bioinformatics ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/decryptedepsilon/ decryptedepsilon]<br />
* '''Description:''' A repository containing some software tools for Bioinformatics<br />
* '''Key-ID:''' 60442BA4<br />
<br />
{{bc|<nowiki><br />
[bioinformatics]<br />
Server = http://decryptedepsilon.bl.ee/repo/x86_64<br />
</nowiki>}}<br />
<br />
==== freifunk-rheinland ====<br />
<br />
* '''Maintainer:''' nomaster<br />
* '''Description:''' Packages for the Freifunk project: batman-adv, batctl, fastd and dependencies.<br />
<br />
{{bc|<nowiki><br />
[freifunk-rheinland]<br />
Server = http://mirror.fluxent.de/archlinux-custom/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== heimdal ====<br />
{{Note|Offline since 2014-03-06.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' Packages are compiled against Heimdal instead of MIT KRB5. Meant to be dropped before {{ic|[core]}} in {{ic|pacman.conf}}. All packages are signed.<br />
* '''Upstream page:''' https://github.com/Kiwilight/Heimdal-Pkgbuilds<br />
{{Warning|Be careful. Do not use this unless you know what you are doing because many of these packages override packages from the ''core'' and ''extra'' repositories}}<br />
<br />
{{bc|<nowiki><br />
[heimdal]<br />
Server = http://www.kiwilight.com/heimdal/$arch/<br />
</nowiki>}}<br />
<br />
==== infinality-bundle-multilib ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle multilib repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle-multilib]<br />
Server = http://bohoomil.com/repo/multilib/$arch<br />
</nowiki>}}<br />
<br />
==== siosm-aur ====<br />
<br />
* '''Maintainer:''' [https://tim.siosm.fr/about/ Timothee Ravier]<br />
* '''Description:''' packages also available in the Arch User Repository, sometimes with minor fixes<br />
* '''Upstream page:''' https://tim.siosm.fr/repositories/<br />
* '''Key-ID:''' 78688F83<br />
<br />
{{bc|<nowiki><br />
[siosm-aur]<br />
Server = http://repo.siosm.fr/$repo/<br />
</nowiki>}}<br />
<br />
==== siosm-selinux ====<br />
<br />
* '''Maintainer:''' [https://tim.siosm.fr/about/ Timothee Ravier]<br />
* '''Description:''' packages required for SELinux support – work in progress (notably, missing an Arch Linux-compatible SELinux policy). See the [[SELinux]] page for details.<br />
* '''Upstream page:''' https://tim.siosm.fr/repositories/<br />
* '''Key-ID:''' 78688F83<br />
<br />
{{bc|<nowiki><br />
[siosm-selinux]<br />
Server = http://repo.siosm.fr/$repo/<br />
</nowiki>}}<br />
<br />
==== subtitlecomposer ====<br />
<br />
* '''Maintainer:''' Mladen Milinkovic (maxrd2)<br />
* '''Description:''' Subtitle Composer stable and nightly builds<br />
* '''Upstream page:''' https://github.com/maxrd2/subtitlecomposer<br />
* '''Key-ID:''' EA8CEBEE<br />
<br />
{{bc|<nowiki><br />
[subtitlecomposer]<br />
Server = http://smoothware.net/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== xyne-x86_64 ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for the "x86_64" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not required as maintainer is a TU<br />
<br />
{{Note|This includes all packages in [[#xyne-any|<nowiki>[xyne-any]</nowiki>]].}}<br />
<br />
{{bc|<nowiki><br />
[xyne-x86_64]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
<br />
==== andrwe ====<br />
<br />
* '''Maintainer:''' Andrwe Lord Weber<br />
* '''Description:''' contains programs I'm using on many systems<br />
* '''Upstream page:''' http://andrwe.dyndns.org/doku.php/blog/repository {{Dead link|2013|11|30}}<br />
<br />
{{bc|<nowiki><br />
[andrwe]<br />
Server = http://repo.andrwe.org/x86_64<br />
</nowiki>}}<br />
<br />
==== archstudio ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Audio and Music Packages optimized for Intel Core i3, i5, and i7.<br />
* '''Upstream page:''' http://www.xsounds.org/~archstudio<br />
<br />
{{bc|<nowiki><br />
[archstudio]<br />
Server = http://www.xsounds.org/~archstudio/x86_64<br />
</nowiki>}}<br />
<br />
==== brtln ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#bpiotrowski Bartłomiej Piotrowski]<br />
* '''Description:''' Alpha releases of MariaDB, Wine with win32 support only, and some VCS packages.<br />
<br />
{{bc|<nowiki><br />
[brtln]<br />
Server = http://pkgbuild.com/~barthalion/brtln/$arch/<br />
</nowiki>}}<br />
<br />
==== hawaii ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' hawaii Qt5/Wayland-based desktop environment<br />
* '''Upstream page:''' http://www.maui-project.org/<br />
<br />
{{bc|<nowiki><br />
[hawaii]<br />
Server = http://archive.maui-project.org/archlinux/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== pnsft-pur ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Japanese input method packages Mozc (vanilla) and libkkc<br />
<br />
{{bc|<nowiki><br />
[pnsft-pur]<br />
Server = http://downloads.sourceforge.net/project/pnsft-aur/pur/x86_64<br />
</nowiki>}}<br />
<br />
==== mingw-w64 ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Almost all mingw-w64 packages in the AUR updated every 8 hours.<br />
* '''Upstream page:''' http://arch.linuxx.org<br />
<br />
{{bc|<nowiki><br />
[mingw-w64]<br />
Server = http://downloads.sourceforge.net/project/mingw-w64-archlinux/$arch<br />
Server = http://arch.linuxx.org/archlinux/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== rightscale ====<br />
<br />
* '''Maintainer:''' Chris Fordham <chris@fordham-nagy.id.au><br />
* '''Description:''' Packages for RightScale including the RightLink cloud instance agent. Install the package, rightscale-agent.<br />
<br />
{{bc|<nowiki><br />
[rightscale]<br />
Server = https://s3-us-west-1.amazonaws.com/archlinux-rightscale/$arch<br />
</nowiki>}}<br />
<br />
==== seiichiro ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' VDR and some plugins, mms, foo2zjs-drivers<br />
<br />
{{bc|<nowiki><br />
[seiichiro]<br />
Server = http://repo.seiichiro0185.org/x86_64<br />
</nowiki>}}<br />
<br />
==== studioidefix ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Precompiled boxee packages.<br />
<br />
{{bc|<nowiki><br />
[studioidefix]<br />
Server = http://studioidefix.googlecode.com/hg/repo/x86_64<br />
</nowiki>}}<br />
<br />
==== zen ====<br />
{{Note|Offline since 2014-03-06.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' Various and zengeist AUR packages.<br />
<br />
{{bc|<nowiki><br />
[zen]<br />
Server = http://zloduch.cz/archlinux/x86_64<br />
</nowiki>}}<br />
<br />
== armv6h only ==<br />
<br />
=== Unsigned ===<br />
<br />
==== arch-fook-armv6h ====<br />
<br />
* '''Maintainer:''' Jaska Kivelä <jaska@kivela.net><br />
* '''Description:''' Stuff that I have compiled for my Raspberry PI. Including Enlightenment and home automation stuff.<br />
<br />
{{bc|<nowiki><br />
[arch-fook-armv6h]<br />
Server = http://kivela.net/jaska/arch-fook-armv6h<br />
</nowiki>}}</div>Demizerhttps://wiki.archlinux.org/index.php?title=Unofficial_user_repositories&diff=311335Unofficial user repositories2014-04-21T14:54:01Z<p>Demizer: Update information about demz-repo-core.</p>
<hr />
<div>[[Category:Package management]]<br />
{{Related articles start}}<br />
{{Related|pacman-key}}<br />
{{Related|Official repositories}}<br />
{{Related articles end}} <br />
The AUR only allows users to upload PKGBUILDs and other package-build-related files; it does not provide a means of distributing binary packages. A user may therefore want to create a binary repository of their packages elsewhere. See [[Pacman Tips#Custom local repository]] for more information.<br />
<br />
If you have your own repository, please add it to this page so that other users will know where to find your packages. Please observe the following rules when adding new repositories:<br />
<br />
* Keep the lists in alphabetical order.<br />
* Include some information about the maintainer: include at least a (nick)name and some form of contact information (web site, email address, user page on ArchWiki or the forums, etc.).<br />
* If the repository is of the ''signed'' variety, please include a key-id, possibly using it as the anchor for a link to its keyserver; if the key is not on a keyserver, include a link to the key file.<br />
* Include some short description (e.g. the category of packages provided in the repository).<br />
* If there is a page (either on ArchWiki or external) containing more information about the repository, include a link to it.<br />
* If possible, avoid using comments in code blocks. The formatted description is much more readable. Users who want some comments in their {{ic|pacman.conf}} can easily create it on their own.<br />
<br />
{{Note|If you are looking to add a signed repository to your {{ic|pacman.conf}}, you must be familiar with [[Pacman-key#Adding unofficial keys]].}}<br />
<br />
{{Expansion|Please fill in the missing information about maintainers.}}<br />
<br />
== Any ==<br />
<br />
"Any" repositories are architecture-independent. In other words, they can be used on both i686 and x86_64 systems.<br />
<br />
=== Signed ===<br />
<br />
==== bioinformatics-any ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/decryptedepsilon/ decryptedepsilon]<br />
* '''Description:''' A repository containing some python packages and genome browser for Bioinformatics<br />
* '''Key-ID:''' 60442BA4<br />
<br />
{{bc|<nowiki><br />
[bioinformatics-any]<br />
Server = http://decryptedepsilon.bl.ee/repo/any<br />
</nowiki>}}<br />
<br />
==== infinality-bundle-fonts ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle-fonts repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle-fonts]<br />
Server = http://bohoomil.com/repo/fonts<br />
</nowiki>}}<br />
<br />
==== xyne-any ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for "any" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{Note|Use this repository only if there is no matching {{ic|[xyne-*]}} repository for your architecture.}}<br />
<br />
{{bc|<nowiki><br />
[xyne-any]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
==== archlinuxgr-any ====<br />
* '''Maintainer:'''<br />
* '''Description:''' The Hellenic (Greek) unofficial Arch Linux repository with many interesting packages.<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr-any]<br />
Server = http://archlinuxgr.tiven.org/archlinux/any<br />
</nowiki>}}<br />
<br />
== Both i686 and x86_64 ==<br />
<br />
Repositories with both i686 and x86_64 versions. The {{ic|$arch}} variable will be set automatically by pacman.<br />
<br />
=== Signed ===<br />
<br />
==== arcanisrepo ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#arcanis arcanis]<br />
* '''Description:''' A repository with some AUR packages including packages from VCS<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[arcanisrepo]<br />
Server = ftp://repo.arcanis.name/repo/$arch<br />
</nowiki>}}<br />
<br />
==== bbqlinux ====<br />
<br />
* '''Maintainer:''' [https://plus.google.com/u/0/+DanielHillenbrand/about Daniel Hillenbrand]<br />
* '''Description:''' Packages for Android Development<br />
* '''Upstream Page:''' http://bbqlinux.org/<br />
* '''Key-ID:''' Get the bbqlinux-keyring package, as it contains the needed keys.<br />
<br />
{{bc|<nowiki><br />
[bbqlinux]<br />
Server = http://packages.bbqlinux.org/$arch<br />
</nowiki>}}<br />
==== carstene1ns ====<br />
<br />
* '''Maintainer:''' [[User:Carstene1ns|Carsten Teibes]]<br />
* '''Description:''' AUR packages maintained and/or used by Carsten Teibes (games/Wii/lib32/Python)<br />
* '''Upstream page:''' http://arch.carsten-teibes.de (still under construction)<br />
* '''Key-ID:''' 2476B20B<br />
<br />
{{bc|<nowiki><br />
[carstene1ns]<br />
Server = http://repo.carsten-teibes.de/$arch<br />
</nowiki>}}<br />
<br />
==== catalyst ====<br />
<br />
* '''Maintainer:''' [[User:Vi0L0 | Vi0l0]]<br />
* '''Description:''' ATI Catalyst proprietary drivers.<br />
* '''Upstream Page:''' http://catalyst.wirephire.com<br />
* '''Key-ID:''' 653C3094<br />
<br />
{{bc|<nowiki><br />
[catalyst]<br />
Server = http://catalyst.wirephire.com/repo/catalyst/$arch<br />
## Mirrors, if the primary server does not work or is too slow:<br />
#Server = http://70.239.162.206/catalyst-mirror/repo/catalyst/$arch<br />
#Server = http://mirror.rts-informatique.fr/archlinux-catalyst/repo/catalyst/$arch<br />
#Server = http://mirror.hactar.bz/Vi0L0/catalyst/$arch<br />
</nowiki>}}<br />
<br />
==== catalyst-hd234k ====<br />
<br />
* '''Maintainer:''' [[User:Vi0L0 | Vi0l0]]<br />
* '''Description:''' ATI Catalyst proprietary drivers.<br />
* '''Upstream Page:''' http://catalyst.wirephire.com<br />
* '''Key-ID:''' 653C3094<br />
<br />
{{bc|<nowiki><br />
[catalyst-hd234k]<br />
Server = http://catalyst.wirephire.com/repo/catalyst-hd234k/$arch<br />
## Mirrors, if the primary server does not work or is too slow:<br />
#Server = http://70.239.162.206/catalyst-mirror/repo/catalyst-hd234k/$arch<br />
#Server = http://mirror.rts-informatique.fr/archlinux-catalyst/repo/catalyst-hd234k/$arch<br />
#Server = http://mirror.hactar.bz/Vi0L0/catalyst-hd234k/$arch<br />
</nowiki>}}<br />
<br />
==== city ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#bgyorgy Balló György]<br />
* '''Description:''' Experimental/unpopular packages.<br />
* '''Upstream page:''' http://pkgbuild.com/~bgyorgy/city.html<br />
* '''Key-ID:''' Not needed, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[city]<br />
Server = http://pkgbuild.com/~bgyorgy/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== crypto ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Includes tomb, tomb-git, and other related software.<br />
<br />
{{bc|<nowiki><br />
[crypto]<br />
Server = http://tomb.dyne.org/arch_repo/$arch<br />
</nowiki>}}<br />
<br />
==== demz-repo-core ====<br />
<br />
* '''Maintainer:''' Jesus Alvarez (demizer)<br />
* '''Description:''' Packages for ZFS on Arch Linux.<br />
* '''Upstream page:''' http://demizerone.com/archzfs<br />
* '''Key-ID:''' 0EE7A126<br />
<br />
{{bc|<nowiki><br />
[demz-repo-core]<br />
Server = http://demizerone.com/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== demz-repo-archiso ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Packages for installing ZFS from an Arch ISO live disk<br />
* '''Upstream page:''' http://demizerone.com/archzfs<br />
* '''Key-ID:''' 0EE7A126 (temporarily using key 5EE46C4C [https://aur.archlinux.org/packages/zfs/?comments=all])<br />
<br />
{{bc|<nowiki><br />
[demz-repo-archiso]<br />
Server = http://demizerone.com/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== infinality-bundle ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle main repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle]<br />
Server = http://bohoomil.com/repo/$arch<br />
</nowiki>}}<br />
<br />
==== metalgamer ====<br />
<br />
* '''Maintainer:''' [http://metalgamer.eu/ metalgamer]<br />
* '''Description:''' Packages I use and/or maintain on the AUR.<br />
* '''Key ID:''' F55313FB<br />
<br />
{{bc|<nowiki><br />
[metalgamer]<br />
Server = http://repo.metalgamer.eu/$arch<br />
</nowiki>}}<br />
<br />
==== pipelight ====<br />
<br />
* '''Maintainer:''' <br />
* '''Description:''' Pipelight and wine-compholio<br />
* '''Upstream page:''' [http://fds-team.de/ fds-team.de]<br />
* '''Key-ID:''' E49CC0415DC2D5CA<br />
* '''Keyfile:''' http://repos.fds-team.de/Release.key<br />
{{bc|<nowiki>[pipelight]<br />
Server = http://repos.fds-team.de/stable/arch/$arch</nowiki>}}<br />
<br />
==== repo-ck ====<br />
<br />
* '''Maintainer:''' [[User:Graysky|graysky]]<br />
* '''Description:''' Kernel and modules with Brain Fuck Scheduler and all the goodies in the ck1 patch set.<br />
* '''Upstream page:''' [http://repo-ck.com repo-ck.com]<br />
* '''Wiki:''' [[repo-ck]]<br />
* '''Key-ID:''' 5EE46C4C<br />
<br />
{{bc|<nowiki><br />
[repo-ck]<br />
Server = http://repo-ck.com/$arch<br />
</nowiki>}}<br />
<br />
==== sergej-repo ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#spupykin Sergej Pupykin]<br />
* '''Description:''' psi-plus, owncloud-git, ziproxy, android, MySQL, and other stuff. Some packages also available for armv7h.<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{bc|<nowiki><br />
[sergej-repo]<br />
Server = http://repo.p5n.pp.ru/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
<br />
==== alucryd ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Repository containing various packages Maxime Gauduin maintains (or not) in the AUR.<br />
<br />
{{bc|<nowiki><br />
[alucryd]<br />
Server = http://pkgbuild.com/~alucryd/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== archaudio ====<br />
<br />
* '''Maintainer:''' [[User:Schivmeister|Ray Rashif]], [https://aur.archlinux.org/account/jhernberg Joakim Hernberg]<br />
* '''Description:''' Pro-audio packages<br />
<br />
{{bc|<nowiki><br />
[archaudio-production]<br />
Server = http://repos.archaudio.org/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== archie-repo ====<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/Kalinda/ Kalinda]<br />
* '''Description:''' Repo for wine-silverlight, pipelight, and some misc packages.<br />
<br />
{{bc|<nowiki><br />
[archie-repo]<br />
Server = http://andontie.net/archie-repo/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxcn ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' The Chinese Arch Linux communities packages.<br />
<br />
{{bc|<nowiki><br />
[archlinuxcn]<br />
Server = http://repo.archlinuxcn.org/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxfr ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:'''<br />
* '''Upstream page:''' http://afur.archlinux.fr<br />
<br />
{{bc|<nowiki><br />
[archlinuxfr]<br />
Server = http://repo.archlinux.fr/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgis ====<br />
{{Note|Off-line since 2014-03-29.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' Maintainers needed - low bandwidth<br />
<br />
{{bc|<nowiki><br />
[archlinuxgis]<br />
Server = http://archlinuxgis.no-ip.org/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgr ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:'''<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr]<br />
Server = http://archlinuxgr.tiven.org/archlinux/$arch<br />
</nowiki>}}<br />
<br />
==== archlinuxgr-kde4 ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' KDE4 packages (plasmoids, themes etc) provided by the Hellenic (Greek) Arch Linux community<br />
<br />
{{bc|<nowiki><br />
[archlinuxgr-kde4]<br />
Server = http://archlinuxgr.tiven.org/archlinux-kde4/$arch<br />
</nowiki>}}<br />
<br />
==== archstuff ====<br />
{{Note|Off-line since 2014-01-06.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' The most-voted AUR packages, plus many bin32-* and lib32-* packages.<br />
<br />
{{bc|<nowiki><br />
[archstuff]<br />
Server = http://archstuff.vs169092.vserver.de/$arch<br />
</nowiki>}}<br />
<br />
==== arsch ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' From users of orgizm.net<br />
<br />
{{bc|<nowiki><br />
[arsch]<br />
Server = http://arsch.orgizm.net/$arch<br />
</nowiki>}}<br />
<br />
==== aurbin ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Automated build of AUR packages<br />
<br />
{{bc|<nowiki><br />
[aurbin]<br />
Server = http://aurbin.net/$arch<br />
</nowiki>}}<br />
<br />
==== cinnamon ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Stable and actively developed Cinnamon packages (Applets, Themes, Extensions), plus others (Hotot, qBitTorrent, GTK themes, Perl modules, and more).<br />
<br />
{{bc|<nowiki><br />
[cinnamon]<br />
Server = http://archlinux.zoelife4u.org/cinnamon/$arch<br />
</nowiki>}}<br />
<br />
==== ede ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Equinox Desktop Environment repository<br />
<br />
{{bc|<nowiki><br />
[ede]<br />
Server = http://www.equinox-project.org/repos/arch/$arch<br />
</nowiki>}}<br />
<br />
==== haskell-core ====<br />
<br />
* '''Maintainer:''' Magnus Therning<br />
* '''Description:''' Arch-Haskell repository<br />
* '''Upstream page:''' https://github.com/archhaskell/habs<br />
<br />
{{bc|<nowiki><br />
[haskell-core]<br />
Server = http://xsounds.org/~haskell/core/$arch<br />
</nowiki>}}<br />
<br />
==== heftig ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#heftig Jan Steffens]<br />
* '''Description:''' Includes linux-zen and aurora (a Firefox development build that works alongside {{Pkg|firefox}} from the ''extra'' repository).<br />
* '''Upstream page:''' https://bbs.archlinux.org/viewtopic.php?id=117157<br />
<br />
{{bc|<nowiki><br />
[heftig]<br />
Server = http://pkgbuild.com/~heftig/repo/$arch<br />
</nowiki>}}<br />
<br />
==== herecura-stable ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Additional packages not found in the ''community'' repository<br />
<br />
{{bc|<nowiki><br />
[herecura-stable]<br />
Server = http://repo.herecura.be/herecura-stable/$arch<br />
</nowiki>}}<br />
<br />
==== herecura-testing ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Additional packages for testing, built against stable Arch<br />
<br />
{{bc|<nowiki><br />
[herecura-testing]<br />
Server = http://repo.herecura.be/herecura-testing/$arch<br />
</nowiki>}}<br />
<br />
==== mesa-git ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Mesa git builds for the ''testing'' and ''multilib-testing'' repositories<br />
<br />
{{bc|<nowiki><br />
[mesa-git]<br />
Server = http://pkgbuild.com/~lcarlier/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== oracle ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Oracle database client<br />
<br />
{{Warning|By adding this you are agreeing to the Oracle license at http://www.oracle.com/technetwork/licenses/instant-client-lic-152016.html}}<br />
<br />
{{bc|<nowiki><br />
[oracle]<br />
Server = http://linux.shikadi.net/arch/$repo/$arch/<br />
</nowiki>}}<br />
<br />
==== pantheon ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#alucryd Maxime Gauduin]<br />
* '''Description:''' Repository containing Pantheon-related packages<br />
<br />
{{bc|<nowiki><br />
[pantheon]<br />
Server = http://pkgbuild.com/~alucryd/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== paulburton-fitbitd ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Contains fitbitd for synchronizing FitBit trackers<br />
<br />
{{bc|<nowiki><br />
[paulburton-fitbitd]<br />
Server = http://www.paulburton.eu/arch/fitbitd/$arch<br />
</nowiki>}}<br />
<br />
==== pfkernel ====<br />
<br />
* '''Maintainer:''' [[User:Nous|nous]]<br />
* '''Description:''' Generic and optimized builds of the Arch kernel with the BFS, TuxOnIce, BFQ and Aufs3 patches (linux-pf, kernel26-pf), plus gdm-old, nvidia-pf, nvidia-96xx, xchat-greek and arora-git<br />
* '''Note:''' To browse through the repository, one needs to append {{ic|index.html}} after the server URL (this is an intentional quirk of Dropbox). For example, for x86_64, point your browser to http://dl.dropbox.com/u/11734958/x86_64/index.html or start at http://tiny.cc/linux-pf<br />
<br />
{{bc|<nowiki><br />
[pfkernel]<br />
Server = http://dl.dropbox.com/u/11734958/$arch<br />
</nowiki>}}<br />
<br />
==== suckless ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' suckless.org packages<br />
<br />
{{bc|<nowiki><br />
[suckless]<br />
Server = http://dl.suckless.org/arch/$arch<br />
</nowiki>}}<br />
<br />
==== unity (xe-xe.org) ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Unity packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity]<br />
Server = http://unity.xe-xe.org/$arch<br />
</nowiki>}}<br />
<br />
==== unity-extra (xe-xe.org) ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Unity extra packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity-extra]<br />
Server = http://unity.xe-xe.org/extra/$arch<br />
</nowiki>}}<br />
<br />
==== unity (humbug.in) ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Unity packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity]<br />
Server = http://unity.humbug.in/$arch<br />
</nowiki>}}<br />
<br />
==== unity-extra (humbug.in) ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Unity extra packages for Arch<br />
<br />
{{bc|<nowiki><br />
[unity-extra]<br />
Server = http://unity.humbug.in/extra/$arch<br />
</nowiki>}}<br />
<br />
==== home_tarakbumba_archlinux_Arch_Extra_standard ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Contains a few pre-built AUR packages (zemberek, firefox-kde-opensuse, etc.)<br />
<br />
{{bc|<nowiki><br />
[home_tarakbumba_archlinux_Arch_Extra_standard]<br />
Server = http://download.opensuse.org/repositories/home:/tarakbumba:/archlinux/Arch_Extra_standard/$arch<br />
</nowiki>}}<br />
<br />
== i686 only ==<br />
<br />
=== Signed ===<br />
<br />
==== eee-ck ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Kernel and modules optimized for Asus Eee PC 701, with -ck patchset.<br />
<br />
{{bc|<nowiki><br />
[eee-ck]<br />
Server = http://zembla.shatteredsymmetry.com/repo<br />
</nowiki>}}<br />
<br />
==== xyne-i686 ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for the "i686" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{Note|This includes all packages in [[#xyne-any|<nowiki>[xyne-any]</nowiki>]].}}<br />
<br />
{{bc|<nowiki><br />
[xyne-i686]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
<br />
==== andrwe ====<br />
<br />
* '''Maintainer:''' Andrwe Lord Weber<br />
* '''Description:''' Each program the maintainer uses on x86_64, compiled for i686 as well<br />
* '''Upstream page:''' http://andrwe.org/linux/repository<br />
<br />
{{bc|<nowiki><br />
[andrwe]<br />
Server = http://repo.andrwe.org/i686<br />
</nowiki>}}<br />
<br />
==== batchbin ====<br />
{{Expansion|Who is the maintainer?}}<br />
{{Note|Offline since 2014-02-15.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' The maintainer's personal projects and utilities which may benefit others.<br />
<br />
{{bc|<nowiki><br />
[batchbin]<br />
Server = http://batchbin.ueuo.com/archlinux<br />
</nowiki>}}<br />
<br />
==== esclinux ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Mostly games, interactive fiction, and ABC notation software already in the AUR.<br />
<br />
{{bc|<nowiki><br />
[esclinux]<br />
Server = http://download.tuxfamily.org/esclinuxcd/ressources/repo/i686/<br />
</nowiki>}}<br />
<br />
==== kpiche ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Stable OpenSync packages.<br />
<br />
{{bc|<nowiki><br />
[kpiche]<br />
Server = http://kpiche.archlinux.ca/repo<br />
</nowiki>}}<br />
<br />
==== kernel26-pae ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' PAE-enabled 32-bit kernel 2.6.39<br />
<br />
{{bc|<nowiki><br />
[kernel26-pae]<br />
Server = http://kernel26-pae.archlinux.ca/<br />
</nowiki>}}<br />
<br />
==== linux-pae ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' PAE-enabled 32-bit kernel 3.0<br />
<br />
{{bc|<nowiki><br />
[linux-pae]<br />
Server = http://pae.archlinux.ca/<br />
</nowiki>}}<br />
<br />
==== rfad ====<br />
<br />
* '''Maintainer:''' requiem [at] archlinux.us <br />
* '''Description:''' Repository made by haxit<br />
<br />
{{bc|<nowiki><br />
[rfad]<br />
Server = http://web.ncf.ca/ey723/archlinux/repo/<br />
</nowiki>}}<br />
<br />
==== studioidefix ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Precompiled boxee packages.<br />
<br />
{{bc|<nowiki><br />
[studioidefix]<br />
Server = http://studioidefix.googlecode.com/hg/repo/i686<br />
</nowiki>}}<br />
<br />
== x86_64 only ==<br />
<br />
=== Signed ===<br />
<br />
==== apathism ====<br />
<br />
* '''Maintainer:''' Koryabkin Ivan ([https://aur.archlinux.org/account/apathism/ apathism])<br />
* '''Upstream page:''' https://apathism.net/<br />
* '''Description:''' AUR packages that take a long time to build, such as {{AUR|firefox-kde-opensuse}}.<br />
* '''Key-ID:''' 3E37398D<br />
* '''Keyfile:''' http://apathism.net/archlinux/apathism.key<br />
<br />
{{bc|<nowiki><br />
[apathism]<br />
Server = http://apathism.net/archlinux/<br />
</nowiki>}}<br />
<br />
==== bioinformatics ====<br />
<br />
* '''Maintainer:''' [https://aur.archlinux.org/account/decryptedepsilon/ decryptedepsilon]<br />
* '''Description:''' A repository containing some software tools for Bioinformatics<br />
* '''Key-ID:''' 60442BA4<br />
<br />
{{bc|<nowiki><br />
[bioinformatics]<br />
Server = http://decryptedepsilon.bl.ee/repo/x86_64<br />
</nowiki>}}<br />
<br />
==== freifunk-rheinland ====<br />
<br />
* '''Maintainer:''' nomaster<br />
* '''Description:''' Packages for the Freifunk project: batman-adv, batctl, fastd and dependencies.<br />
<br />
{{bc|<nowiki><br />
[freifunk-rheinland]<br />
Server = http://mirror.fluxent.de/archlinux-custom/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== heimdal ====<br />
{{Note|Offline since 2014-03-06.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' Packages are compiled against Heimdal instead of MIT KRB5. The repository is meant to be placed before {{ic|[core]}} in {{ic|pacman.conf}}. All packages are signed.<br />
* '''Upstream page:''' https://github.com/Kiwilight/Heimdal-Pkgbuilds<br />
{{Warning|Do not use this repository unless you know what you are doing: many of its packages override packages from the ''core'' and ''extra'' repositories.}}<br />
<br />
{{bc|<nowiki><br />
[heimdal]<br />
Server = http://www.kiwilight.com/heimdal/$arch/<br />
</nowiki>}}<br />
<br />
==== infinality-bundle-multilib ====<br />
<br />
* '''Maintainer:''' [http://bohoomil.com/ bohoomil]<br />
* '''Description:''' infinality-bundle multilib repository.<br />
* '''Upstream page:''' [http://bohoomil.com/ Infinality bundle & fonts]<br />
* '''Key-ID:''' 962DDE58<br />
<br />
{{bc|<nowiki><br />
[infinality-bundle-multilib]<br />
Server = http://bohoomil.com/repo/multilib/$arch<br />
</nowiki>}}<br />
<br />
==== siosm-aur ====<br />
<br />
* '''Maintainer:''' [https://tim.siosm.fr/about/ Timothee Ravier]<br />
* '''Description:''' packages also available in the Arch User Repository, sometimes with minor fixes<br />
* '''Upstream page:''' https://tim.siosm.fr/repositories/<br />
* '''Key-ID:''' 78688F83<br />
<br />
{{bc|<nowiki><br />
[siosm-aur]<br />
Server = http://repo.siosm.fr/$repo/<br />
</nowiki>}}<br />
<br />
==== siosm-selinux ====<br />
<br />
* '''Maintainer:''' [https://tim.siosm.fr/about/ Timothee Ravier]<br />
* '''Description:''' packages required for SELinux support – work in progress (notably, missing an Arch Linux-compatible SELinux policy). See the [[SELinux]] page for details.<br />
* '''Upstream page:''' https://tim.siosm.fr/repositories/<br />
* '''Key-ID:''' 78688F83<br />
<br />
{{bc|<nowiki><br />
[siosm-selinux]<br />
Server = http://repo.siosm.fr/$repo/<br />
</nowiki>}}<br />
<br />
==== subtitlecomposer ====<br />
<br />
* '''Maintainer:''' Mladen Milinkovic (maxrd2)<br />
* '''Description:''' Subtitle Composer stable and nightly builds<br />
* '''Upstream page:''' https://github.com/maxrd2/subtitlecomposer<br />
* '''Key-ID:''' EA8CEBEE<br />
<br />
{{bc|<nowiki><br />
[subtitlecomposer]<br />
Server = http://smoothware.net/$repo/$arch<br />
</nowiki>}}<br />
<br />
==== xyne-x86_64 ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#xyne Xyne]<br />
* '''Description:''' A repository for Xyne's own projects containing packages for the "x86_64" architecture.<br />
* '''Upstream page:''' http://xyne.archlinux.ca/projects/<br />
* '''Key-ID:''' Not required, as maintainer is a TU<br />
<br />
{{Note|This includes all packages in [[#xyne-any|<nowiki>[xyne-any]</nowiki>]].}}<br />
<br />
{{bc|<nowiki><br />
[xyne-x86_64]<br />
Server = http://xyne.archlinux.ca/repos/xyne<br />
</nowiki>}}<br />
<br />
=== Unsigned ===<br />
<br />
{{Note|Users will need to add the following to these entries: {{ic|1=SigLevel = PackageOptional}}}}<br />
<br />
==== andrwe ====<br />
<br />
* '''Maintainer:''' Andrwe Lord Weber<br />
* '''Description:''' Contains programs the maintainer uses on many systems<br />
* '''Upstream page:''' http://andrwe.dyndns.org/doku.php/blog/repository {{Dead link|2013|11|30}}<br />
<br />
{{bc|<nowiki><br />
[andrwe]<br />
Server = http://repo.andrwe.org/x86_64<br />
</nowiki>}}<br />
<br />
==== archstudio ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Audio and music packages optimized for Intel Core i3, i5, and i7.<br />
* '''Upstream page:''' http://www.xsounds.org/~archstudio<br />
<br />
{{bc|<nowiki><br />
[archstudio]<br />
Server = http://www.xsounds.org/~archstudio/x86_64<br />
</nowiki>}}<br />
<br />
==== brtln ====<br />
<br />
* '''Maintainer:''' [https://www.archlinux.org/trustedusers/#bpiotrowski Bartłomiej Piotrowski]<br />
* '''Description:''' Alpha releases of MariaDB, Wine with win32 support only, and some VCS packages.<br />
<br />
{{bc|<nowiki><br />
[brtln]<br />
Server = http://pkgbuild.com/~barthalion/brtln/$arch/<br />
</nowiki>}}<br />
<br />
==== hawaii ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Hawaii, a Qt5/Wayland-based desktop environment<br />
* '''Upstream page:''' http://www.maui-project.org/<br />
<br />
{{bc|<nowiki><br />
[hawaii]<br />
Server = http://archive.maui-project.org/archlinux/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== pnsft-pur ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Japanese input method packages Mozc (vanilla) and libkkc<br />
<br />
{{bc|<nowiki><br />
[pnsft-pur]<br />
Server = http://downloads.sourceforge.net/project/pnsft-aur/pur/x86_64<br />
</nowiki>}}<br />
<br />
==== mingw-w64 ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Almost all mingw-w64 packages in the AUR, updated every 8 hours.<br />
* '''Upstream page:''' http://arch.linuxx.org<br />
<br />
{{bc|<nowiki><br />
[mingw-w64]<br />
Server = http://downloads.sourceforge.net/project/mingw-w64-archlinux/$arch<br />
Server = http://arch.linuxx.org/archlinux/$repo/os/$arch<br />
</nowiki>}}<br />
<br />
==== rightscale ====<br />
<br />
* '''Maintainer:''' Chris Fordham <chris@fordham-nagy.id.au><br />
* '''Description:''' Packages for RightScale, including the RightLink cloud instance agent. Install the rightscale-agent package.<br />
<br />
{{bc|<nowiki><br />
[rightscale]<br />
Server = https://s3-us-west-1.amazonaws.com/archlinux-rightscale/$arch<br />
</nowiki>}}<br />
<br />
==== seiichiro ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' VDR and some plugins, mms, foo2zjs-drivers<br />
<br />
{{bc|<nowiki><br />
[seiichiro]<br />
Server = http://repo.seiichiro0185.org/x86_64<br />
</nowiki>}}<br />
<br />
==== studioidefix ====<br />
<br />
* '''Maintainer:'''<br />
* '''Description:''' Precompiled boxee packages.<br />
<br />
{{bc|<nowiki><br />
[studioidefix]<br />
Server = http://studioidefix.googlecode.com/hg/repo/x86_64<br />
</nowiki>}}<br />
<br />
==== zen ====<br />
{{Note|Offline since 2014-03-06.}}<br />
* '''Maintainer:'''<br />
* '''Description:''' Various AUR packages, including those maintained by zengeist.<br />
<br />
{{bc|<nowiki><br />
[zen]<br />
Server = http://zloduch.cz/archlinux/x86_64<br />
</nowiki>}}<br />
<br />
== armv6h only ==<br />
<br />
=== Unsigned ===<br />
<br />
==== arch-fook-armv6h ====<br />
<br />
* '''Maintainer:''' Jaska Kivelä <jaska@kivela.net><br />
* '''Description:''' Packages the maintainer compiled for the Raspberry Pi, including Enlightenment and home automation software.<br />
<br />
{{bc|<nowiki><br />
[arch-fook-armv6h]<br />
Server = http://kivela.net/jaska/arch-fook-armv6h<br />
</nowiki>}}</div>Demizerhttps://wiki.archlinux.org/index.php?title=ZFS_Installation&diff=311331ZFS Installation2014-04-21T14:52:13Z<p>Demizer: Remove section containing a tip about dependency resolution.</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|ZFS}}<br />
{{Related|Playing with ZFS}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related|ZFS on FUSE}}<br />
{{Related articles end}}<br />
<br />
This article provides several options for the installation of the requisite ZFS software packages.<br />
<br />
==Building from AUR==<br />
[[AUR]] contains {{AUR|zfs-git}} and {{AUR|zfs-lts}} packages for building ZFSonLinux. The git packages are required if ZFS is to be used with the default kernel: ZFSonLinux.org is slow to make official releases, while kernel API changes happen often, so building successfully requires up-to-date ZFS sources from GitHub.<br />
<br />
The ZFS kernel module and related utils are available in the [[AUR]]; all are required:<br />
<br />
*{{AUR|spl-utils-git}}<br />
*{{AUR|spl-git}}<br />
*{{AUR|zfs-utils-git}}<br />
*{{AUR|zfs-git}}<br />
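<br />
As a rough sketch (one way among several), the four git packages can be fetched with {{AUR|cower}} and built with makepkg in dependency order:<br />
<br />
{{bc|<nowiki><br />
for pkg in spl-utils-git spl-git zfs-utils-git zfs-git; do<br />
    cower -d "$pkg" && (cd "$pkg" && makepkg -si)<br />
done<br />
</nowiki>}}<br />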
<br />
For users who would rather avoid git-based packages, the {{AUR|zfs-lts}} packages, which use the sources officially released by ZFSonLinux.org, are available in the [[AUR]]:<br />
<br />
*{{AUR|spl-utils-lts}}<br />
*{{AUR|spl-lts}}<br />
*{{AUR|zfs-utils-lts}}<br />
*{{AUR|zfs-lts}}<br />
<br />
{{note|The ZFS and SPL (Solaris Porting Layer is a Linux kernel module which provides many of the Solaris kernel APIs) kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the archzfs repository.}}<br />
<br />
=== Automated build script ===<br />
The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages, with the following shell script. The only requirements for it to work are:<br />
*{{pkg|sudo}} - Note that your user needs sudo rights to {{ic|/usr/bin/clean-chroot-manager}} for the script below to work.<br />
*{{pkg|rsync}} - Needed for moving over the build files.<br />
*{{AUR|cower}} - Needed to grab sources from the AUR.<br />
*{{AUR|clean-chroot-manager}} - Needed to build in a clean chroot and add packages to a local repo.<br />
<br />
Be sure to add the local repo to {{ic|/etc/pacman.conf}} like so:<br />
{{hc|$ tail /etc/pacman.conf|<nowiki><br />
[chroot_local]<br />
SigLevel = Optional TrustAll<br />
Server = file:///path/to/localrepo/defined/below<br />
</nowiki>}}<br />
<br />
{{hc|~/bin/build_zfs|<nowiki><br />
#!/bin/bash<br />
#<br />
# ZFS Builder by graysky<br />
#<br />
<br />
# define the temp space for building here<br />
WORK='/scratch'<br />
<br />
# create this dir and chown it to your user<br />
# this is the local repo which will store your zfs packages<br />
REPO='/var/repo'<br />
<br />
# Add the following entry to /etc/pacman.conf for the local repo<br />
#[chroot_local]<br />
#SigLevel = Optional TrustAll<br />
#Server = file:///path/to/localrepo/defined/above<br />
<br />
for i in rsync cower clean-chroot-manager; do<br />
command -v $i >/dev/null 2>&1 || {<br />
echo "I require $i but it's not installed. Aborting." >&2<br />
exit 1; }<br />
done<br />
<br />
[[ -f ~/.config/clean-chroot-manager.conf ]] &&<br />
. ~/.config/clean-chroot-manager.conf || exit 1<br />
<br />
[[ ! -d "$REPO" ]] &&<br />
echo "Make the dir for your local repo and chown it: $REPO" && exit 1<br />
<br />
[[ ! -d "$WORK" ]] &&<br />
echo "Make a work directory: $WORK" && exit 1<br />
<br />
cd "$WORK"<br />
for i in spl-utils spl zfs-utils zfs; do<br />
[[ -d $i ]] && rm -rf $i<br />
cower -d $i<br />
done<br />
<br />
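# build each package in the clean chroot, in dependency order<br />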
for i in spl-utils spl zfs-utils zfs; do<br />
cd "$WORK/$i"<br />
sudo ccm s<br />
done<br />
<br />
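# copy the packages built in the chroot into the local repo<br />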
rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"<br />
</nowiki>}}<br />
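<br />
Assuming the script was saved as {{ic|~/bin/build_zfs}} and that directory is in {{ic|PATH}}, make it executable and run it as a regular user (sudo is invoked internally for the chroot builds):<br />
<br />
 $ chmod +x ~/bin/build_zfs<br />
 $ build_zfs<br />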
<br />
==Unofficial Repository==<br />
<br />
{{Poor writing|Does not conform with [[Help:Style#Package management instructions]].}}<br />
<br />
For fast and effortless installation and updates, the [https://github.com/demizer/archzfs "archzfs"] signed repository is available to add to your {{ic|pacman.conf}}. The details of the repositories are given [[Unofficial user repositories#demz-repo-core | here]].<br />
<br />
Once the [[Pacman-key#Adding_unofficial_keys | key has been signed]], it is possible to update the package database and install ZFS packages:<br />
<br />
# pacman -S archzfs<br />
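<br />
As a quick sanity check (a minimal sketch; both commands ship with the packages above), load the kernel module and query the pool state:<br />
<br />
 # modprobe zfs<br />
 # zpool status<br />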
<br />
===Archiso Tracking Repository===<br />
<br />
ZFS can easily be used from within the archiso live environment via the special archiso tracking repository for ZFS. This repository makes it easy to install Arch Linux on a root ZFS filesystem, or to mount ZFS pools from an up-to-date live medium. The details for using this repository from a live environment are given [[Unofficial user repositories#demz-repo-archiso | here]].<br />
<br />
This repository and packages are also signed, so the key must be locally signed following the steps listed in the previous section before use. For a guide on how to install Arch Linux on to a root ZFS filesystem, see [[Installing Arch Linux on ZFS]].<br />
<br />
With the above steps complete, the process is as follows. Do not use {{ic|pacman -Syyu}}, as this will require many signatures and build against a kernel that is not the one running from the ISO. Instead do:<br />
<br />
# pacman -Syy<br />
<br />
# pacman -S archzfs<br />
<br />
If this succeeded, running {{ic|zpool status}} should give some output other than a kernel insmod error. You may then want to partition your drives using gdisk or a similar tool. If you previously partitioned your drives with ZFS, you will need to run {{ic|zfs upgrade}}; if you have valuable data, make sure you have a backed-up snapshot or a backup image before doing so.</div>Demizer