https://wiki.archlinux.org/api.php?action=feedcontributions&user=Mouseman&feedformat=atomArchWiki - User contributions [en]2024-03-29T14:08:16ZUser contributionsMediaWiki 1.41.0https://wiki.archlinux.org/index.php?title=Power_management&diff=704664Power management2021-12-06T16:29:37Z<p>Mouseman: /* PC won't wake from sleep on B550I motherboards */ correct a invalid link</p>
<hr />
<div>[[Category:Power management]]<br />
[[es:Power management]]<br />
[[ja:電源管理]]<br />
[[zh-hans:Power management]]<br />
{{Related articles start}}<br />
{{Related|Power management/Suspend and hibernate}}<br />
{{Related|Display Power Management Signaling}}<br />
{{Related|CPU frequency scaling}}<br />
{{Related|Hybrid graphics}}<br />
{{Related|Kernel modules}}<br />
{{Related|sysctl}}<br />
{{Related|udev}}<br />
{{Related articles end}}<br />
[[Wikipedia:Power management|Power management]] is a feature that turns off the power or switches a system's components to a low-power state when they are inactive.<br />
<br />
In Arch Linux, power management consists of two main parts:<br />
<br />
# Configuration of the Linux kernel, which interacts with the hardware.<br />
#* [[Kernel parameters]]<br />
#* [[Kernel modules]]<br />
#* [[udev]] rules<br />
# Configuration of userspace tools, which interact with the kernel and react to its events. Many userspace tools also allow modifying the kernel configuration in a "user-friendly" way. See [[#Userspace tools]] for the options.<br />
<br />
== Userspace tools ==<br />
<br />
Using these tools can replace adjusting many settings by hand. Only run '''one''' of these tools to avoid possible conflicts, as they all work more or less similarly. Have a look at the [[:Category:Power management|power management category]] to get an overview of what power management options exist in Arch Linux.<br />
<br />
These are the more popular scripts and tools designed to help power saving:<br />
<br />
=== Console ===<br />
<br />
* {{App|[[acpid]]| A daemon for delivering ACPI power management events with netlink support.|https://sourceforge.net/projects/acpid2/|{{Pkg|acpid}}}}<br />
* {{App|[[Laptop Mode Tools]]|Utility to configure laptop power saving settings, considered by many to be the de facto utility for power saving though may take a bit of configuration.|https://github.com/rickysarraf/laptop-mode-tools|{{AUR|laptop-mode-tools}}}}<br />
* {{App|libsmbios|Library and tools for interacting with Dell SMBIOS tables.|https://github.com/dell/libsmbios|{{Pkg|libsmbios}}}}<br />
* {{App|[[powertop]]|A tool to diagnose issues with power consumption and power management to help set power saving settings.|https://01.org/powertop/|{{Pkg|powertop}}}}<br />
* {{App|[[systemd]]|A system and service manager.|https://freedesktop.org/wiki/Software/systemd/|{{Pkg|systemd}}}}<br />
* {{App|[[TLP]]|Advanced power management for Linux.|https://linrunner.de/tlp|{{Pkg|tlp}}}}<br />
<br />
=== Graphical ===<br />
<br />
* {{App|batterymon-clone|Simple battery monitor tray icon.|https://github.com/jareksed/batterymon-clone|{{AUR|batterymon-clone}}}}<br />
* {{App|batsignal|Lightweight battery monitor that uses libnotify to warn of low battery levels.|https://github.com/electrickite/batsignal|{{AUR|batsignal}}}}<br />
* {{App|cbatticon|Lightweight and fast battery icon that sits in your system tray.|https://github.com/valr/cbatticon|{{Pkg|cbatticon}}}}<br />
* {{App|GNOME Power Statistics|System power information and statistics for GNOME.|https://gitlab.gnome.org/GNOME/gnome-power-manager|{{Pkg|gnome-power-manager}}}}<br />
* {{App|KDE Power Devil|Power management module for Plasma.|https://invent.kde.org/plasma/powerdevil|{{Pkg|powerdevil}}}}<br />
* {{App|LXQt Power Management|Power management module for LXQt.|https://github.com/lxqt/lxqt-powermanagement|{{Pkg|lxqt-powermanagement}}}}<br />
* {{App|MATE Power Management|Power management tool for MATE.|https://github.com/mate-desktop/mate-power-manager|{{Pkg|mate-power-manager}}}}<br />
* {{App|MATE Power Statistics|System power information and statistics for MATE.|https://github.com/mate-desktop/mate-power-manager|{{Pkg|mate-power-manager}}}}<br />
* {{App|powerkit|Desktop independent power manager.|https://github.com/rodlie/powerkit|{{AUR|powerkit}}}}<br />
* {{App|Xfce Power Manager|Power manager for Xfce.|https://docs.xfce.org/xfce/xfce4-power-manager/start|{{Pkg|xfce4-power-manager}}}}<br />
* {{App|vattery|Battery monitoring application written in Vala that will display the status of a laptop battery in a system tray.|https://www.jezra.net/projects/vattery.html|{{AUR|vattery}}}}<br />
<br />
== Power management with systemd ==<br />
<br />
=== ACPI events ===<br />
<br />
''systemd'' handles some power-related [[Wikipedia:Advanced_Configuration_and_Power_Interface|ACPI]] events, whose actions can be configured in {{ic|/etc/systemd/logind.conf}} or {{ic|/etc/systemd/logind.conf.d/*.conf}} — see {{man|5|logind.conf}}. On systems with no dedicated power manager, this may replace the [[acpid]] daemon which is usually used to react to these ACPI events.<br />
<br />
The specified action for each event can be one of {{ic|ignore}}, {{ic|poweroff}}, {{ic|reboot}}, {{ic|halt}}, {{ic|suspend}}, {{ic|hibernate}}, {{ic|hybrid-sleep}}, {{ic|suspend-then-hibernate}}, {{ic|lock}} or {{ic|kexec}}. In case of hibernation and suspension, they must be properly [[Power management/Suspend and hibernate|set up]]. If an event is not configured, ''systemd'' will use a default action.<br />
<br />
{| class="wikitable sortable" border=1<br />
!Event handler<br />
!Description<br />
!Default action<br />
|-<br />
|{{ic|HandlePowerKey}}<br />
|Triggered when the power key/button is pressed.<br />
|{{ic|poweroff}}<br />
|-<br />
|{{ic|HandleSuspendKey}}<br />
|Triggered when the suspend key/button is pressed.<br />
|{{ic|suspend}}<br />
|-<br />
|{{ic|HandleHibernateKey}}<br />
|Triggered when the hibernate key/button is pressed.<br />
|{{ic|hibernate}}<br />
|-<br />
|{{ic|HandleLidSwitch}}<br />
|Triggered when the lid is closed, except in the cases below.<br />
|{{ic|suspend}}<br />
|-<br />
|{{ic|HandleLidSwitchDocked}}<br />
|Triggered when the lid is closed if the system is inserted in a docking station, or more than one display is connected.<br />
|{{ic|ignore}}<br />
|-<br />
|{{ic|HandleLidSwitchExternalPower}}<br />
|Triggered when the lid is closed if the system is connected to external power.<br />
|action set for {{ic|HandleLidSwitch}}<br />
|}<br />
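<br />
For example, to hibernate when the lid is closed on battery but suspend while on external power (the drop-in file name is arbitrary):<br />
<br />
{{hc|/etc/systemd/logind.conf.d/lid.conf|2=<br />
[Login]<br />
HandleLidSwitch=hibernate<br />
HandleLidSwitchExternalPower=suspend<br />
}}<br />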
<br />
To apply any changes, signal {{ic|systemd-logind}} with {{ic|HUP}}:<br />
<br />
# systemctl kill -s HUP systemd-logind<br />
<br />
{{Note|''systemd'' cannot handle AC and Battery ACPI events, so if you use [[Laptop Mode Tools]] or other similar tools [[acpid]] is still required.}}<br />
<br />
==== Power managers ====<br />
<br />
Some [[desktop environment]]s include power managers which [https://www.freedesktop.org/wiki/Software/systemd/inhibit/ inhibit] (temporarily turn off) some or all of the ''systemd'' ACPI settings. If such a power manager is running, then the actions for ACPI events can be configured in the power manager alone. Changes to {{ic|/etc/systemd/logind.conf}} or {{ic|/etc/systemd/logind.conf.d/*.conf}} need to be made only if you wish to configure behaviour for a particular event that is not inhibited by the power manager.<br />
<br />
Note that if the power manager does not inhibit ''systemd'' for the appropriate events, you can end up with a situation where ''systemd'' suspends your system and then, when the system is woken up, the other power manager suspends it again. As of December 2016, the power managers of [[KDE]], [[GNOME]], [[Xfce]] and [[MATE]] issue the necessary ''inhibit'' commands. If the ''inhibit'' commands are not being issued, such as when using [[acpid]] or others to handle ACPI events, set the {{ic|Handle}} options to {{ic|ignore}}. See also {{man|1|systemd-inhibit}}.<br />
<br />
==== xss-lock ====<br />
<br />
{{pkg|xss-lock}} subscribes to the ''systemd'' events {{ic|suspend}}, {{ic|hibernate}}, {{ic|lock-session}}, and {{ic|unlock-session}} with appropriate actions (running the locker and waiting for the user to unlock, or killing the locker). ''xss-lock'' also reacts to [[DPMS]] events and runs or kills the locker in response.<br />
<br />
Start ''xss-lock'' in your [[autostart]], for example:<br />
<br />
xss-lock -- i3lock -n -i ''background_image.png'' &<br />
<br />
=== Suspend and hibernate ===<br />
<br />
''systemd'' provides commands to suspend to RAM or hibernate using the kernel's native suspend/resume functionality. There are also mechanisms to add hooks to customize pre- and post-suspend actions.<br />
<br />
{{ic|systemctl suspend}} should work out of the box; for {{ic|systemctl hibernate}} to work on your system, you need to follow the instructions at [[Suspend and hibernate#Hibernation]].<br />
<br />
There are also two modes combining suspend and hibernate:<br />
<br />
* {{ic|systemctl hybrid-sleep}} suspends the system both to RAM and disk, so a complete power loss does not result in lost data. This mode is also called [[Power management/Suspend and hibernate|suspend to both]].<br />
* {{ic|systemctl suspend-then-hibernate}} initially suspends the system to RAM and if it is not interrupted within the delay specified by {{ic|HibernateDelaySec}} in {{man|5|systemd-sleep.conf}}, then the system will be woken using an RTC alarm and hibernated.<br />
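<br />
For example, to hibernate after 30 minutes of suspension (the drop-in file name is arbitrary):<br />
<br />
{{hc|/etc/systemd/sleep.conf.d/hibernate-delay.conf|2=<br />
[Sleep]<br />
HibernateDelaySec=30min<br />
}}<br />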
<br />
{{Note|''systemd'' can also use other suspend backends (such as [[Uswsusp]]), in addition to the default ''kernel'' backend, in order to put the computer to sleep or hibernate. See [[Uswsusp#With systemd]] for an example.}}<br />
<br />
==== Hybrid-sleep on suspend or hibernation request ====<br />
<br />
It is possible to configure systemd to always do a ''hybrid-sleep'' even on a ''suspend'' or ''hibernation'' request.<br />
<br />
The default ''suspend'' and ''hibernation'' action can be configured in the {{ic|/etc/systemd/sleep.conf}} file. To set both actions to ''hybrid-sleep'':<br />
<br />
{{hc|/etc/systemd/sleep.conf|2=<br />
[Sleep]<br />
# suspend=hybrid-sleep<br />
SuspendMode=suspend<br />
SuspendState=disk<br />
# hibernate=hybrid-sleep<br />
HibernateMode=suspend<br />
HibernateState=disk<br />
}}<br />
<br />
See the {{man|5|sleep.conf.d}} manual page for details and the [https://www.kernel.org/doc/html/latest/admin-guide/pm/sleep-states.html#basic-sysfs-interfaces-for-system-suspend-and-hibernation linux kernel documentation on power states].<br />
<br />
=== Sleep hooks ===<br />
<br />
==== Suspend/resume service files ====<br />
<br />
Service files can be hooked into ''suspend.target'', ''hibernate.target'', ''sleep.target'', ''hybrid-sleep.target'' and ''suspend-then-hibernate.target'' to execute actions before or after suspend/hibernate. Separate files should be created for user actions and root/system actions. [[Enable]] the {{ic|suspend@''user''}} and {{ic|resume@''user''}} services to have them started at boot. Examples:<br />
<br />
{{hc|/etc/systemd/system/suspend@.service|2=<br />
[Unit]<br />
Description=User suspend actions<br />
Before=sleep.target<br />
<br />
[Service]<br />
User=%I<br />
Type=forking<br />
Environment=DISPLAY=:0<br />
ExecStartPre= -/usr/bin/pkill -u %u unison ; /usr/local/bin/music.sh stop<br />
ExecStart=/usr/bin/sflock<br />
ExecStartPost=/usr/bin/sleep 1<br />
<br />
[Install]<br />
WantedBy=sleep.target<br />
}}<br />
<br />
{{hc|/etc/systemd/system/resume@.service|2=<br />
[Unit]<br />
Description=User resume actions<br />
After=suspend.target<br />
<br />
[Service]<br />
User=%I<br />
Type=simple<br />
ExecStart=/usr/local/bin/ssh-connect.sh<br />
<br />
[Install]<br />
WantedBy=suspend.target<br />
}}<br />
<br />
{{Note|As screen lockers may return before the screen is "locked", the screen may flash on resuming from suspend. Adding a small delay via {{ic|1=ExecStartPost=/usr/bin/sleep 1}} helps prevent this.}}<br />
<br />
For root/system actions ([[enable]] the {{ic|root-resume}} and {{ic|root-suspend}} services to have them started at boot):<br />
<br />
{{hc|/etc/systemd/system/root-suspend.service|2=<br />
[Unit]<br />
Description=Local system suspend actions<br />
Before=sleep.target<br />
<br />
[Service]<br />
Type=simple<br />
ExecStart=-/usr/bin/pkill sshfs<br />
<br />
[Install]<br />
WantedBy=sleep.target<br />
}}<br />
<br />
{{hc|/etc/systemd/system/root-resume.service|2=<br />
[Unit]<br />
Description=Local system resume actions<br />
After=suspend.target<br />
<br />
[Service]<br />
Type=simple<br />
ExecStart=/usr/bin/systemctl restart mnt-media.automount<br />
<br />
[Install]<br />
WantedBy=suspend.target<br />
}}<br />
<br />
{{Tip|A couple of handy hints about these service files (more in {{man|5|systemd.service}}):<br />
<br />
* If {{ic|1=Type=oneshot}} then you can use multiple {{ic|1=ExecStart=}} lines. Otherwise only one {{ic|ExecStart}} line is allowed. You can add more commands with either {{ic|ExecStartPre}} or by separating commands with a semicolon (see the first example above; note the spaces before and after the semicolon, as they are ''required'').<br />
* A command prefixed with {{ic|-}} will cause a non-zero exit status to be ignored and treated as a successful command. <br />
* The best place to find errors when troubleshooting these service files is of course with [[journalctl]].<br />
}}<br />
<br />
==== Combined Suspend/resume service file ====<br />
<br />
With the combined suspend/resume service file, a single hook does all the work for different phases (sleep/resume) and for different targets (suspend/hibernate/hybrid-sleep).<br />
<br />
Example and explanation:<br />
<br />
{{hc|/etc/systemd/system/wicd-sleep.service|2=<br />
[Unit]<br />
Description=Wicd sleep hook<br />
Before=sleep.target<br />
StopWhenUnneeded=yes<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=-/usr/share/wicd/daemon/suspend.py<br />
ExecStop=-/usr/share/wicd/daemon/autoconnect.py<br />
<br />
[Install]<br />
WantedBy=sleep.target<br />
}}<br />
<br />
* {{ic|1=RemainAfterExit=yes}}: Once started, the service is considered active until it is explicitly stopped.<br />
* {{ic|1=StopWhenUnneeded=yes}}: When active, the service will be stopped if no other active service requires it. In this specific example, it will be stopped after ''sleep.target'' is stopped.<br />
* Because ''sleep.target'' is pulled in by ''suspend.target'', ''hibernate.target'' and ''hybrid-sleep.target'' and because ''sleep.target'' itself is a ''StopWhenUnneeded'' service, the hook is guaranteed to start/stop properly for different tasks.<br />
<br />
===== Generic service template =====<br />
<br />
In this example, we create a [http://0pointer.net/blog/projects/instances.html template service] which we can then use to hook any existing systemd service to power events:[https://narkive.com/mYzxSIDN.6]<br />
<br />
{{hc|/etc/systemd/system/sleep@.service|2=<br />
[Unit]<br />
Description=%I sleep hook<br />
Before=sleep.target<br />
StopWhenUnneeded=yes<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=-/usr/bin/systemctl stop %i<br />
ExecStop=-/usr/bin/systemctl start %i<br />
<br />
[Install]<br />
WantedBy=sleep.target<br />
}}<br />
<br />
Then [[enable]] an instance of this template by specifying the basename of an existing systemd service after the {{ic|@}}, i.e., {{ic|sleep@'''''service-file-basename'''''.service}}. See {{man|5|systemd.unit|DESCRIPTION}} for more details on templates.<br />
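<br />
For example, to stop a hypothetical {{ic|foo.service}} before sleep and start it again on resume:<br />
<br />
 # systemctl enable sleep@foo.service<br />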
<br />
{{Tip|Templates are not limited to systemd services and can be used with other programs. See [https://fedoramagazine.org/systemd-template-unit-files/] for some examples.}}<br />
<br />
==== Hooks in /usr/lib/systemd/system-sleep ====<br />
<br />
''systemd'' runs all executables in {{ic|/usr/lib/systemd/system-sleep/}}, passing two arguments to each of them:<br />
<br />
* Argument 1: either {{ic|pre}} or {{ic|post}}, depending on whether the machine is going to sleep or waking up<br />
* Argument 2: {{ic|suspend}}, {{ic|hibernate}} or {{ic|hybrid-sleep}}, depending on which is being invoked<br />
<br />
''systemd'' will run these scripts concurrently and not one after another.<br />
<br />
The output of any custom script will be logged by ''systemd-suspend.service'', ''systemd-hibernate.service'' or ''systemd-hybrid-sleep.service''. You can see its output in ''systemd''<nowiki>'</nowiki>s [[journalctl]]:<br />
<br />
# journalctl -b -u systemd-suspend.service<br />
<br />
{{Note|You can also use ''sleep.target'', ''suspend.target'', ''hibernate.target'' or ''hybrid-sleep.target'' to hook units into the sleep state logic instead of using custom scripts.}}<br />
<br />
An example of a custom sleep script:<br />
<br />
{{hc|/usr/lib/systemd/system-sleep/example.sh|<br />
#!/bin/sh<br />
case $1/$2 in<br />
pre/*)<br />
echo "Going to $2..."<br />
;;<br />
post/*)<br />
echo "Waking up from $2..."<br />
;;<br />
esac<br />
}}<br />
<br />
Do not forget to make your script executable:<br />
<br />
# chmod a+x /usr/lib/systemd/system-sleep/example.sh<br />
<br />
See {{man|7|systemd.special}} and {{man|8|systemd-sleep}} for more details.<br />
<br />
=== Troubleshooting ===<br />
<br />
==== Delayed lid switch action ====<br />
<br />
When performing lid switches in short succession, ''logind'' will delay the suspend action for up to 90s to detect possible docks. [https://lists.freedesktop.org/archives/systemd-devel/2015-January/027131.html] This delay was made configurable with systemd v220:[https://github.com/systemd/systemd/commit/9d10cbee89ca7f82d29b9cb27bef11e23e3803ba]<br />
<br />
{{hc|/etc/systemd/logind.conf|2=<br />
...<br />
HoldoffTimeoutSec=30s<br />
...<br />
}}<br />
<br />
==== Suspend from corresponding laptop Fn key not working ====<br />
<br />
If, regardless of the setting in logind.conf, the sleep button does not work (pressing it does not even produce a message in syslog), then logind is probably not watching the keyboard device. [https://lists.freedesktop.org/archives/systemd-devel/2015-February/028325.html] Do:<br />
<br />
# journalctl --grep="Watching system buttons"<br />
<br />
You might see something like this:<br />
<br />
May 25 21:28:19 vmarch.lan systemd-logind[210]: Watching system buttons on /dev/input/event2 (Power Button)<br />
May 25 21:28:19 vmarch.lan systemd-logind[210]: Watching system buttons on /dev/input/event3 (Sleep Button)<br />
May 25 21:28:19 vmarch.lan systemd-logind[210]: Watching system buttons on /dev/input/event4 (Video Bus)<br />
<br />
Notice that no keyboard device is listed. Now obtain ATTRS{name} for the parent keyboard device [https://systemd-devel.freedesktop.narkive.com/Rbi3rjNN/patch-1-2-logind-add-support-for-tps65217-power-button]:<br />
<br />
{{hc|# udevadm info -a /dev/input/by-path/*-kbd|2=<br />
...<br />
KERNEL=="event0"<br />
...<br />
ATTRS{name}=="AT Translated Set 2 keyboard"<br />
}}<br />
<br />
Now write a custom udev rule to add the "power-switch" tag:<br />
<br />
{{hc|/etc/udev/rules.d/70-power-switch-my.rules|2=<br />
ACTION=="remove", GOTO="power_switch_my_end"<br />
SUBSYSTEM=="input", KERNEL=="event*", ATTRS{name}=="AT Translated Set 2 keyboard", TAG+="power-switch"<br />
LABEL="power_switch_my_end"<br />
}}<br />
<br />
[[Restart]] {{ic|systemd-udevd.service}}, reload rules by running {{ic|udevadm trigger}} as root, and [[restart]] {{ic|systemd-logind.service}}.<br />
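<br />
That is:<br />
<br />
 # systemctl restart systemd-udevd.service<br />
 # udevadm trigger<br />
 # systemctl restart systemd-logind.service<br />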
<br />
Now you should see {{ic|Watching system buttons on /dev/input/event0}} in syslog.<br />
<br />
==== PC won't wake from sleep on B550I motherboards ====<br />
<br />
On some motherboards with B550I chipsets (e.g. Gigabyte B550I AORUS PRO AX), the system will not completely enter the sleep state and will not come out of it. Symptoms include the system entering sleep and the monitor turning off, but internal LEDs on the motherboard might stay on, or the power LED stays on. Subsequently, the system will not come back from this state and requires a hard power off. If you have similar issues with AMD, first make sure your system is fully updated and check that the AMD [[Microcode]] package is installed.<br />
<br />
Next, check the following:<br />
<br />
$ cat /proc/acpi/wakeup<br />
<br />
You will see something like this:<br />
<br />
Device S-state Status Sysfs node<br />
GP12 S4 *enabled pci:0000:00:07.1<br />
GP13 S4 *enabled pci:0000:00:08.1<br />
XHC0 S4 *enabled pci:0000:0b:00.3<br />
GP30 S4 *disabled<br />
GP31 S4 *disabled<br />
PS2K S3 *disabled<br />
GPP0 S4 *enabled pci:0000:00:01.1<br />
GPP8 S4 *enabled pci:0000:00:03.1<br />
PTXH S4 *enabled pci:0000:05:00.0<br />
PT20 S4 *disabled<br />
PT24 S4 *disabled<br />
PT26 S4 *disabled<br />
PT27 S4 *disabled<br />
PT28 S4 *enabled pci:0000:06:08.0<br />
PT29 S4 *enabled pci:0000:06:09.0<br />
<br />
Notice the line starting with {{ic|GPP0}}. If it is enabled, toggle it off by running the following command as root:<br />
<br />
 # echo GPP0 > /proc/acpi/wakeup<br />
<br />
Now test by running {{ic|systemctl suspend}} and letting the system go to sleep, then try to wake the system after a few seconds. If it works, you can make the workaround permanent by creating a unit file:<br />
<br />
{{hc|/etc/systemd/system/toggle.ggp0.to.fix.suspend.issue.service|2=<br />
[Unit]<br />
Description=Disable GPP0 to fix suspend issue<br />
<br />
[Service]<br />
ExecStart=/bin/sh -c "/bin/echo GPP0 > /proc/acpi/wakeup"<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
}}<br />
<br />
[[Reload]] the systemd manager, then [[enable]] and [[start]] the newly created unit.<br />
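<br />
For example:<br />
<br />
 # systemctl daemon-reload<br />
 # systemctl enable --now toggle.ggp0.to.fix.suspend.issue.service<br />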
<br />
== Power saving ==<br />
<br />
{{Note|See [[Laptop#Power management]] for power management specific to laptops, such as battery monitoring. See also pages specific to your CPU and GPU (e.g., [[Ryzen]], [[AMDGPU]]).}}<br />
<br />
This section is a reference for creating custom scripts and power saving settings such as by udev rules. Make sure that the settings are not managed by some [[#Userspace tools|other utility]] to avoid conflicts.<br />
<br />
Almost all of the features listed here are worth using whether the computer is on AC or battery power. Most have negligible performance impact and are not enabled by default only because of commonly broken hardware/drivers. Reducing power usage means reducing heat, which can even lead to higher performance on a modern Intel or AMD CPU, thanks to [[Wikipedia:Intel Turbo Boost|dynamic overclocking]].<br />
<br />
=== Processors with Intel HWP (Intel Hardware P-state) support ===<br />
<br />
{{Merge|CPU frequency scaling|More context in the main article.}}<br />
<br />
The available energy preferences of an HWP-capable processor are {{ic|default performance balance_performance balance_power power}}.<br />
<br />
This can be validated by running<br />
<br />
$ cat /sys/devices/system/cpu/cpufreq/policy?/energy_performance_available_preferences<br />
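<br />
A preference can also be tried out at runtime by writing it to the corresponding ''sysfs'' file of each policy, e.g. for the first policy:<br />
<br />
 # echo balance_power > /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference<br />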
<br />
To conserve more energy, you can change the preference by creating the following file:<br />
<br />
{{hc|/etc/tmpfiles.d/energy_performance_preference.conf|<br />
w /sys/devices/system/cpu/cpufreq/policy?/energy_performance_preference - - - - balance_power<br />
}}<br />
<br />
See the {{man|8|systemd-tmpfiles}} and {{man|5|tmpfiles.d}} man pages for details.<br />
<br />
=== Audio ===<br />
<br />
==== Kernel ====<br />
<br />
By default, audio power saving is turned off by most drivers. It can be enabled by setting the {{ic|power_save}} parameter, the time (in seconds) after which the device goes into idle mode. To idle the audio card after one second, create the following file for Intel sound cards:<br />
<br />
{{hc|/etc/modprobe.d/audio_powersave.conf|2=<br />
options snd_hda_intel power_save=1<br />
}}<br />
<br />
Alternatively, use the following for ac97:<br />
<br />
options snd_ac97_codec power_save=1<br />
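<br />
The timeout currently in use can be verified through ''sysfs'' (assuming the {{ic|snd_hda_intel}} driver):<br />
<br />
 $ cat /sys/module/snd_hda_intel/parameters/power_save<br />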
<br />
{{Note|<br />
* To retrieve the manufacturer and the corresponding kernel driver which is used for your sound card, run {{ic|lspci -k}}.<br />
* Toggling the audio card's power state can cause a popping sound or noticeable latency on some broken hardware.<br />
}}<br />
<br />
It is also possible to further reduce the audio power requirements by disabling the HDMI audio output, which can be done by [[blacklisting]] the appropriate kernel modules (e.g. {{ic|snd_hda_codec_hdmi}} in case of Intel hardware).<br />
<br />
==== PulseAudio ====<br />
<br />
By default, PulseAudio suspends any audio sources that have become idle for too long. When using an external USB microphone, recordings may start with a pop sound. As a workaround, comment out the following line in {{ic|/etc/pulse/default.pa}}:<br />
<br />
load-module module-suspend-on-idle<br />
<br />
Afterwards, restart PulseAudio with {{ic|systemctl restart --user pulseaudio}}.<br />
<br />
=== Backlight ===<br />
<br />
See [[Backlight]].<br />
<br />
=== Bluetooth ===<br />
<br />
{{expansion|reason=The device should likely be disabled with hciconfig first.}}<br />
<br />
To disable bluetooth completely, [[blacklist]] the {{ic|btusb}} and {{ic|bluetooth}} modules.<br />
<br />
To turn off bluetooth only temporarily, use ''rfkill'':<br />
<br />
# rfkill block bluetooth<br />
<br />
Or with udev rule:<br />
<br />
{{hc|/etc/udev/rules.d/50-bluetooth.rules|2=<br />
# disable bluetooth<br />
SUBSYSTEM=="rfkill", ATTR{type}=="bluetooth", ATTR{state}="0"<br />
}}<br />
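<br />
To re-enable bluetooth later:<br />
<br />
 # rfkill unblock bluetooth<br />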
<br />
=== Web camera ===<br />
<br />
If you do not use the integrated web camera, [[blacklist]] the {{ic|uvcvideo}} module.<br />
<br />
=== Kernel parameters ===<br />
<br />
This section uses configurations in {{ic|/etc/sysctl.d/}}, which is ''"a drop-in directory for kernel sysctl parameters."'' See [http://0pointer.de/blog/projects/the-new-configuration-files The New Configuration Files] and more specifically {{man|5|sysctl.d}} for more information.<br />
<br />
==== Disabling NMI watchdog ====<br />
<br />
{{Expansion|This or {{ic|nowatchdog}} as can be seen in [[Improving performance#Watchdogs]]}}<br />
<br />
The [[Wikipedia:Non-maskable interrupt|NMI]] watchdog is a debugging feature to catch hardware hangs that cause a kernel panic. On some systems it can generate a lot of interrupts, causing a noticeable increase in power usage:<br />
<br />
{{hc|/etc/sysctl.d/disable_watchdog.conf|2=<br />
kernel.nmi_watchdog = 0<br />
}}<br />
<br />
or add {{ic|1=nmi_watchdog=0}} to the [[kernel line]] to disable it completely from early boot.<br />
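<br />
Whether the watchdog is currently active can be checked with:<br />
<br />
 $ cat /proc/sys/kernel/nmi_watchdog<br />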
<br />
==== Writeback Time ====<br />
<br />
Increasing the virtual memory dirty writeback time helps to aggregate disk I/O, thus reducing scattered disk writes and increasing power savings. To set the value to 60 seconds (default is 5 seconds):<br />
<br />
{{hc|/etc/sysctl.d/dirty.conf|2=<br />
vm.dirty_writeback_centisecs = 6000<br />
}}<br />
<br />
To do the same for journal commits on supported filesystems (e.g. ext4, btrfs...), use {{ic|1=commit=60}} as an option in [[fstab]].<br />
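<br />
For example (the UUID is a placeholder):<br />
<br />
{{hc|/etc/fstab|2=<br />
UUID=... / ext4 defaults,noatime,commit=60 0 1<br />
}}<br />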
<br />
Note that this value is modified as a side effect of the Laptop Mode setting below. See also [[sysctl#Virtual memory]] for other parameters affecting I/O performance and power saving.<br />
<br />
==== Laptop Mode ====<br />
<br />
See the [https://www.kernel.org/doc/html/latest/admin-guide/laptops/laptop-mode.html kernel documentation] on the laptop mode "knob". "A sensible value for the knob is 5 seconds."<br />
<br />
{{hc|/etc/sysctl.d/laptop.conf|2=<br />
vm.laptop_mode = 5<br />
}}<br />
<br />
{{Note|This setting is mainly relevant to spinning-disk drives.}}<br />
<br />
=== Network interfaces ===<br />
<br />
[[Wake-on-LAN]] can be a useful feature, but if you are not making use of it then it is simply draining extra power waiting for a magic packet while in suspend. You can adapt the [[Wake-on-LAN#udev]] rule to disable the feature for all ethernet interfaces. To enable powersaving with {{Pkg|iw}} on all wireless interfaces:<br />
<br />
{{hc|/etc/udev/rules.d/'''81'''-wifi-powersave.rules|2=<br />
ACTION=="add", SUBSYSTEM=="net", KERNEL=="wl*", RUN+="/usr/bin/iw dev $name set power_save on"<br />
}}<br />
<br />
The name of the configuration file is important. With the use of [[Network configuration#Change interface name|persistent device names]] in systemd, the above network rule, named lexicographically '''after''' {{ic|80-net-setup-link.rules}}, is applied after the device is renamed with a persistent name e.g. {{ic|wlan0}} renamed {{ic|wlp3s0}}. Be aware that the {{ic|RUN}} command is executed after all rules have been processed and must anyway use the persistent name, available in {{ic|$name}} for the matched device.<br />
<br />
==== Intel wireless cards (iwlwifi) ====<br />
<br />
Additional power saving functions of Intel wireless cards with {{ic|iwlwifi}} driver can be enabled by passing the correct parameters to the kernel module. Making them persistent can be achieved by adding the lines below to the {{ic|/etc/modprobe.d/iwlwifi.conf}} file:<br />
<br />
options iwlwifi power_save=1<br />
<br />
This option will probably increase your median latency:<br />
<br />
options iwlwifi uapsd_disable=0<br />
<br />
On kernels < 5.4 you can use this option, but it will probably decrease your maximum throughput:<br />
<br />
options iwlwifi d0i3_disable=0<br />
<br />
Depending on your wireless card, one of the following two options will apply:<br />
<br />
options iwlmvm power_scheme=3<br />
<br />
options iwldvm force_cam=0<br />
<br />
You can check which one is relevant by checking which of these modules is loaded:<br />
<br />
# lsmod | grep '^iwl.vm'<br />
<br />
Keep in mind that these power saving options are experimental and can cause an unstable system.<br />
<br />
=== Bus power management ===<br />
<br />
==== Active State Power Management ====<br />
<br />
If the kernel believes the computer does not support [[Wikipedia:Active State Power Management|ASPM]], it will be disabled on boot. To check whether it is enabled:<br />
<br />
# lspci -vv | grep 'ASPM.*abled;'<br />
<br />
ASPM is handled by the BIOS; if it is disabled, it will be for one of the following reasons [https://wireless.wiki.kernel.org/en/users/documentation/ASPM]:<br />
<br />
# The BIOS disabled it for some reason (e.g. due to conflicts).<br />
# PCIe requires ASPM, but L0s is optional (so L0s might be disabled and only L1 enabled).<br />
# The BIOS might not have been programmed for it.<br />
# The BIOS is buggy.<br />
<br />
If you believe the computer does support ASPM, it can be forced on for the kernel to handle with the {{ic|1=pcie_aspm=force}} [[kernel parameter]].<br />
<br />
{{Warning|<br />
* Forcing on ASPM can cause a freeze/panic, so make sure you have a way to undo the option if it does not work.<br />
* On systems that do not support it forcing on ASPM can even increase power consumption.<br />
* This forces ASPM in kernel while it can still remain disabled in hardware and not work. To check whether this is the case, run {{ic|dmesg {{!}} grep ASPM}} as root. If so, consult the Wiki article specific to your hardware.<br />
}}<br />
<br />
To set the policy to {{ic|powersave}} (the following command will not work unless ASPM is enabled):<br />
<br />
# echo powersave > /sys/module/pcie_aspm/parameters/policy<br />
<br />
By default it looks like this:<br />
<br />
{{hc|$ cat /sys/module/pcie_aspm/parameters/policy|<br />
[default] performance powersave powersupersave<br />
}}<br />
<br />
==== PCI Runtime Power Management ====<br />
<br />
{{hc|/etc/udev/rules.d/pci_pm.rules|2=<br />
SUBSYSTEM=="pci", ATTR{power/control}="auto"<br />
}}<br />
<br />
The rule above powers all unused devices down, but some devices will not wake up again. To allow runtime power management only for devices that are known to work, use simple matching against vendor and device IDs (use {{ic|lspci -nn}} to get these values):<br />
<br />
{{hc|/etc/udev/rules.d/pci_pm.rules|2=<br />
# whitelist for pci autosuspend<br />
SUBSYSTEM=="pci", ATTR{vendor}=="0x1234", ATTR{device}=="0x1234", ATTR{power/control}="auto"<br />
}}<br />
<br />
Alternatively, to blacklist devices that are not working with PCI runtime power management and enable it for all other devices:<br />
<br />
{{hc|/etc/udev/rules.d/pci_pm.rules|2=<br />
# blacklist for pci runtime power management<br />
SUBSYSTEM=="pci", ATTR{vendor}=="0x1234", ATTR{device}=="0x1234", ATTR{power/control}="on", GOTO="pci_pm_end"<br />
<br />
SUBSYSTEM=="pci", ATTR{power/control}="auto"<br />
LABEL="pci_pm_end"<br />
}}<br />
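Before committing to a whitelist or blacklist, it helps to see which devices currently have runtime power management enabled. This is a hedged sketch; the sysfs root is a variable only so the loop can be tested against a fake tree, and defaults to the real {{ic|/sys/bus/pci/devices}}:<br />

```shell
#!/bin/sh
# Sketch: print vendor/device IDs and the current power/control value for
# every PCI device, to help decide which IDs to whitelist.
SYS_PCI="${SYS_PCI:-/sys/bus/pci/devices}"

list_pci_pm() {
    for dev in "$SYS_PCI"/*; do
        [ -f "$dev/power/control" ] || continue
        printf '%s %s:%s control=%s\n' \
            "$(basename "$dev")" \
            "$(cat "$dev/vendor")" "$(cat "$dev/device")" \
            "$(cat "$dev/power/control")"
    done
}
```

The printed vendor:device pairs match the IDs shown by {{ic|lspci -nn}} and can be copied into the rules above.<br />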
<br />
==== USB autosuspend ====<br />
<br />
The Linux kernel can automatically suspend USB devices when they are not in use. This can sometimes save quite a bit of power; however, some USB devices are not compatible with USB power saving and start to misbehave (common for USB mice and keyboards). [[udev]] rules based on whitelist or blacklist filtering can help to mitigate the problem.<br />
<br />
The simplest, though likely impractical, example is enabling autosuspend for all USB devices:<br />
<br />
{{hc|/etc/udev/rules.d/50-usb_power_save.rules|2=<br />
ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{power/control}="auto"<br />
}}<br />
<br />
To allow autosuspend only for devices that are known to work, use simple matching against vendor and product IDs (use ''lsusb'' to get these values):<br />
<br />
{{hc|/etc/udev/rules.d/50-usb_power_save.rules|2=<br />
# whitelist for usb autosuspend<br />
ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{idVendor}=="05c6", ATTR{idProduct}=="9205", ATTR{power/control}="auto"<br />
}}<br />
<br />
Alternatively, to blacklist devices that are not working with USB autosuspend and enable it for all other devices:<br />
<br />
{{hc|/etc/udev/rules.d/50-usb_power_save.rules|2=<br />
# blacklist for usb autosuspend<br />
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="05c6", ATTR{idProduct}=="9205", GOTO="power_usb_rules_end"<br />
<br />
ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{power/control}="auto"<br />
LABEL="power_usb_rules_end"<br />
}}<br />
<br />
The default autosuspend idle delay time is controlled by the {{ic|autosuspend}} parameter of the {{ic|usbcore}} built-in [[kernel module]]. To set the delay to 5 seconds instead of the default 2 seconds, add the following [[kernel parameter]] for your bootloader.<br />
<br />
{{bc|1=usbcore.autosuspend=5}}<br />
<br />
Similarly to {{ic|power/control}}, the delay time can be fine-tuned per device by setting the {{ic|power/autosuspend}} attribute. Alternatively, autosuspend can be disabled for a device by setting {{ic|power/autosuspend}} to -1 (i.e. never autosuspend):<br />
<br />
{{hc|/etc/udev/rules.d/50-usb_power_save.rules|2=<br />
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="05c6", ATTR{idProduct}=="9205", ATTR{power/autosuspend}="-1"<br />
}}<br />
<br />
See the [https://www.kernel.org/doc/html/latest/driver-api/usb/power-management.html Linux kernel documentation] for more information on USB power management.<br />
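Writing the whitelist by hand is error-prone, so a small script can print a ready-made rule line for each connected device. This is a sketch only (the sysfs root is parameterized for testing), and the generated lines should be pruned to known-good devices before being copied into the rules file:<br />

```shell
#!/bin/sh
# Sketch: emit a whitelist udev rule line for every connected USB device.
# Copy only the lines for devices known to work into 50-usb_power_save.rules.
SYS_USB="${SYS_USB:-/sys/bus/usb/devices}"

usb_rule_lines() {
    for dev in "$SYS_USB"/*; do
        [ -f "$dev/idVendor" ] && [ -f "$dev/idProduct" ] || continue
        printf 'ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{idVendor}=="%s", ATTR{idProduct}=="%s", ATTR{power/control}="auto"\n' \
            "$(cat "$dev/idVendor")" "$(cat "$dev/idProduct")"
    done
}
```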
<br />
==== SATA Active Link Power Management ====<br />
<br />
{{Warning|SATA Active Link Power Management can lead to data loss on some devices. Do not enable this setting unless you have frequent backups.}}<br />
<br />
Linux 4.15 added a [https://hansdegoede.livejournal.com/18412.html setting] called {{ic|med_power_with_dipm}} that matches the behaviour of the Windows IRST driver and should not cause data loss with recent SSD/HDD drives. The power saving can be significant, ranging [https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ebb82e3c79d2a956366d0848304a53648bd6350b from 1.0 to 1.5 watts when idle]. Since Linux 4.16, it is the default setting for Intel-based laptops [https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ebb82e3c79d2a956366d0848304a53648bd6350b].<br />
<br />
The current setting can be read from {{ic|/sys/class/scsi_host/host*/link_power_management_policy}} as follows:<br />
<br />
$ cat /sys/class/scsi_host/host*/link_power_management_policy<br />
<br />
{| class="wikitable"<br />
|+ Available ALPM settings<br />
! Setting<br />
! Description<br />
! Power saving<br />
|-<br />
| max_performance<br />
| current default<br />
| None<br />
|-<br />
| medium_power<br />
| -<br />
| ~1.0 Watts<br />
|-<br />
| med_power_with_dipm<br />
| recommended setting<br />
| ~1.5 Watts<br />
|-<br />
| min_power<br />
| '''WARNING: possible data loss'''<br />
| ~1.5 Watts<br />
|}<br />
<br />
To apply {{ic|med_power_with_dipm}} on boot, create the following udev rule:<br />
<br />
{{hc|/etc/udev/rules.d/hd_power_save.rules|2=<br />
ACTION=="add", SUBSYSTEM=="scsi_host", KERNEL=="host*", ATTR{link_power_management_policy}="med_power_with_dipm"<br />
}}<br />
<br />
{{Note|This adds latency when accessing a drive that has been idle, so it is one of the few settings that may be worth toggling based on whether you are on AC power.}}<br />
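The policy can also be applied at runtime, which is useful for testing a setting before persisting it. A minimal sketch (run as root on a real system; the sysfs root is a variable only so the loop can be tested against a fake tree):<br />

```shell
#!/bin/sh
# Sketch: apply an ALPM policy to every SATA host at runtime, for trying a
# setting out before making it permanent with a udev rule.
SCSI_HOSTS="${SCSI_HOSTS:-/sys/class/scsi_host}"

set_alpm() {
    policy="$1"
    for f in "$SCSI_HOSTS"/host*/link_power_management_policy; do
        # Skip when the glob did not match or the file is not writable.
        [ -w "$f" ] || continue
        echo "$policy" > "$f"
    done
}
```

For example, {{ic|set_alpm med_power_with_dipm}} applies the recommended setting to all hosts until the next reboot.<br />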
<br />
=== Hard disk drive ===<br />
<br />
See [[hdparm#Power management configuration]] for drive parameters that can be set.<br />
<br />
Power saving is not effective when too many programs frequently write to the disk. Tracking which programs write to the disk, and how and when they do so, is the way to limit disk usage. Use {{Pkg|iotop}} to see which programs use the disk frequently. See [[Improving performance#Storage devices]] for other tips.<br />
<br />
Small changes, such as setting the [[Fstab#atime options|noatime]] mount option, can also help. If enough RAM is available, consider disabling or limiting [[swappiness]], as it can prevent a good number of disk writes.<br />
<br />
=== CD-ROM or DVD drive ===<br />
<br />
See [[Udisks#Devices do not remain unmounted (udisks)]].<br />
<br />
== Tools and scripts ==<br />
<br />
{{Style|Merged from [[Power saving]], needs reorganization to fit into this page.}}<br />
<br />
=== Using a script and an udev rule ===<br />
<br />
Since systemd users can suspend and hibernate through {{ic|systemctl suspend}} or {{ic|systemctl hibernate}} and handle ACPI events with {{ic|/etc/systemd/logind.conf}}, it might be interesting to remove ''pm-utils'' and [[acpid]]. There is just one thing systemd cannot do (as of systemd 204): power management depending on whether the system is running on AC or battery. To fill this gap, you can create a single [[udev]] rule that runs a script when the AC adapter is plugged in or unplugged:<br />
<br />
{{hc|/etc/udev/rules.d/powersave.rules|2=<br />
SUBSYSTEM=="power_supply", ATTR{online}=="0", RUN+="/path/to/your/script true"<br />
SUBSYSTEM=="power_supply", ATTR{online}=="1", RUN+="/path/to/your/script false"<br />
}}<br />
<br />
{{Note|You can use the same script that ''pm-powersave'' uses. You just have to make it executable and place it somewhere else (for example {{ic|/usr/local/bin/}}).}}<br />
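The script itself can be very simple. The following skeleton is only a sketch of the expected interface: the udev rule passes {{ic|true}} when the system switches to battery and {{ic|false}} when AC power returns, and the echoed messages are placeholders for whatever settings you choose to apply:<br />

```shell
#!/bin/sh
# Sketch of /path/to/your/script: receives "true" on battery, "false" on AC.
# Replace the echo placeholders with real power settings for your hardware.
powersave() {
    if [ "$1" = "true" ]; then
        echo "applying battery settings"
        # e.g. echo powersave > /sys/module/pcie_aspm/parameters/policy
    else
        echo "applying AC settings"
        # e.g. echo default > /sys/module/pcie_aspm/parameters/policy
    fi
}

powersave "${1:-false}"
```

Make the script executable and place it somewhere like {{ic|/usr/local/bin/}}.<br />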
<br />
Examples of powersave scripts:<br />
<br />
* [https://github.com/supplantr/ftw ftw], package: {{AUR|ftw-git}}<br />
* [https://github.com/Unia/powersave powersave]<br />
* [https://github.com/quequotion/pantheon-bzr-qq/blob/master/EXTRAS/indicator-powersave/throttle throttle], from {{AUR|indicator-powersave}}<br />
<br />
The above udev rule should work as expected, but if your power settings are not updated after a suspend or hibernate cycle, you should add a script in {{ic|/usr/lib/systemd/system-sleep/}} with the following contents:<br />
<br />
{{hc|/usr/lib/systemd/system-sleep/00powersave|<br />
#!/bin/sh<br />
<br />
case "$1" in<br />
pre) /path/to/your/script false ;;<br />
post)<br />
    if grep -q 0 /sys/class/power_supply/AC0/online 2>/dev/null<br />
    then<br />
        /path/to/your/script true<br />
    else<br />
        /path/to/your/script false<br />
    fi<br />
    ;;<br />
esac<br />
exit 0<br />
}}<br />
<br />
Do not forget to make it executable!<br />
<br />
{{Note|Be aware that AC0 may be different for your laptop, change it if that is the case.}}<br />
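Instead of hardcoding {{ic|AC0}}, the mains adapter can be detected generically through {{ic|/sys/class/power_supply}}. A hedged sketch (the sysfs root is a variable only for testability):<br />

```shell
#!/bin/sh
# Sketch: succeed when any Mains-type power supply reports online=1,
# regardless of whether it is named AC0, AC, or ADP1.
POWER_SUPPLY="${POWER_SUPPLY:-/sys/class/power_supply}"

on_ac_power() {
    for ps in "$POWER_SUPPLY"/*; do
        [ -f "$ps/type" ] && [ -f "$ps/online" ] || continue
        if [ "$(cat "$ps/type")" = "Mains" ] && [ "$(cat "$ps/online")" = "1" ]; then
            return 0
        fi
    done
    return 1
}
```

The function can replace the hardcoded {{ic|grep}} in the system-sleep script above.<br />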
<br />
=== Print power settings ===<br />
<br />
This script prints power settings and a variety of other properties for USB and PCI devices. Note that root permissions are needed to see all settings.<br />
<br />
{{bc|1=<br />
#!/bin/bash<br />
<br />
for i in $(find /sys/devices -name "bMaxPower")<br />
do<br />
busdir=${i%/*}<br />
busnum=$(<$busdir/busnum)<br />
devnum=$(<$busdir/devnum)<br />
title=$(lsusb -s $busnum:$devnum)<br />
<br />
printf "\n\n+++ %s\n -%s\n" "$title" "$busdir"<br />
<br />
for ff in $(find $busdir/power -type f ! -empty 2>/dev/null)<br />
do<br />
v=$(cat $ff 2>/dev/null{{!}}tr -d "\n")<br />
[[ ${#v} -gt 0 ]] && echo -e " ${ff##*/}=$v";<br />
v=;<br />
done {{!}} sort -g;<br />
done;<br />
<br />
printf "\n\n\n+++ %s\n" "Kernel Modules"<br />
for mod in $(lspci -k {{!}} sed -n '/in use:/s,^.*: ,,p' {{!}} sort -u)<br />
do<br />
echo "+ $mod";<br />
systool -v -m $mod 2> /dev/null {{!}} sed -n "/Parameters:/,/^$/p";<br />
done<br />
}}<br />
<br />
== See also ==<br />
<br />
* [https://www.thinkwiki.org/wiki/How_to_reduce_power_consumption ThinkWiki:How to reduce power consumption]<br />
* [https://ivanvojtko.blogspot.sk/2016/04/how-to-get-longer-battery-life-on-linux.html How to get longer battery life on Linux]</div>
<hr />
<div><br />
<br />
== Userspace tools ==<br />
<br />
=== Console ===<br />
<br />
* {{App|[[acpid]]| A daemon for delivering ACPI power management events with netlink support.|https://sourceforge.net/projects/acpid2/|{{Pkg|acpid}}}}<br />
* {{App|[[Laptop Mode Tools]]|Utility to configure laptop power saving settings, considered by many to be the de facto utility for power saving though may take a bit of configuration.|https://github.com/rickysarraf/laptop-mode-tools|{{AUR|laptop-mode-tools}}}}<br />
* {{App|libsmbios|Library and tools for interacting with Dell SMBIOS tables.|https://github.com/dell/libsmbios|{{Pkg|libsmbios}}}}<br />
* {{App|[[powertop]]|A tool to diagnose issues with power consumption and power management to help set power saving settings.|https://01.org/powertop/|{{Pkg|powertop}}}}<br />
* {{App|[[systemd]]|A system and service manager.|https://freedesktop.org/wiki/Software/systemd/|{{Pkg|systemd}}}}<br />
* {{App|[[TLP]]|Advanced power management for Linux.|https://linrunner.de/tlp|{{Pkg|tlp}}}}<br />
<br />
=== Graphical ===<br />
<br />
* {{App|batterymon-clone|Simple battery monitor tray icon.|https://github.com/jareksed/batterymon-clone|{{AUR|batterymon-clone}}}}<br />
* {{App|batsignal|Lightweight battery monitor that uses libnotify to warn of low battery levels.|https://github.com/electrickite/batsignal|{{AUR|batsignal}}}}<br />
* {{App|cbatticon|Lightweight and fast battery icon that sits in your system tray.|https://github.com/valr/cbatticon|{{Pkg|cbatticon}}}}<br />
* {{App|GNOME Power Statistics|System power information and statistics for GNOME.|https://gitlab.gnome.org/GNOME/gnome-power-manager|{{Pkg|gnome-power-manager}}}}<br />
* {{App|KDE Power Devil|Power management module for Plasma.|https://invent.kde.org/plasma/powerdevil|{{Pkg|powerdevil}}}}<br />
* {{App|LXQt Power Management|Power management module for LXQt.|https://github.com/lxqt/lxqt-powermanagement|{{Pkg|lxqt-powermanagement}}}}<br />
* {{App|MATE Power Management|Power management tool for MATE.|https://github.com/mate-desktop/mate-power-manager|{{Pkg|mate-power-manager}}}}<br />
* {{App|MATE Power Statistics|System power information and statistics for MATE.|https://github.com/mate-desktop/mate-power-manager|{{Pkg|mate-power-manager}}}}<br />
* {{App|powerkit|Desktop independent power manager.|https://github.com/rodlie/powerkit|{{AUR|powerkit}}}}<br />
* {{App|Xfce Power Manager|Power manager for Xfce.|https://docs.xfce.org/xfce/xfce4-power-manager/start|{{Pkg|xfce4-power-manager}}}}<br />
* {{App|vattery|Battery monitoring application written in Vala that will display the status of a laptop battery in a system tray.|https://www.jezra.net/projects/vattery.html|{{AUR|vattery}}}}<br />
<br />
== Power management with systemd ==<br />
<br />
=== ACPI events ===<br />
<br />
''systemd'' handles some power-related [[Wikipedia:Advanced_Configuration_and_Power_Interface|ACPI]] events, whose actions can be configured in {{ic|/etc/systemd/logind.conf}} or {{ic|/etc/systemd/logind.conf.d/*.conf}} — see {{man|5|logind.conf}}. On systems with no dedicated power manager, this may replace the [[acpid]] daemon which is usually used to react to these ACPI events.<br />
<br />
The specified action for each event can be one of {{ic|ignore}}, {{ic|poweroff}}, {{ic|reboot}}, {{ic|halt}}, {{ic|suspend}}, {{ic|hibernate}}, {{ic|hybrid-sleep}}, {{ic|suspend-then-hibernate}}, {{ic|lock}} or {{ic|kexec}}. In case of hibernation and suspension, they must be properly [[Power management/Suspend and hibernate|set up]]. If an event is not configured, ''systemd'' will use a default action.<br />
<br />
{| class="wikitable sortable" border=1<br />
!Event handler<br />
!Description<br />
!Default action<br />
|-<br />
|{{ic|HandlePowerKey}}<br />
|Triggered when the power key/button is pressed.<br />
|{{ic|poweroff}}<br />
|-<br />
|{{ic|HandleSuspendKey}}<br />
|Triggered when the suspend key/button is pressed.<br />
|{{ic|suspend}}<br />
|-<br />
|{{ic|HandleHibernateKey}}<br />
|Triggered when the hibernate key/button is pressed.<br />
|{{ic|hibernate}}<br />
|-<br />
|{{ic|HandleLidSwitch}}<br />
|Triggered when the lid is closed, except in the cases below.<br />
|{{ic|suspend}}<br />
|-<br />
|{{ic|HandleLidSwitchDocked}}<br />
|Triggered when the lid is closed if the system is inserted in a docking station, or more than one display is connected.<br />
|{{ic|ignore}}<br />
|-<br />
|{{ic|HandleLidSwitchExternalPower}}<br />
|Triggered when the lid is closed if the system is connected to external power.<br />
|action set for {{ic|HandleLidSwitch}}<br />
|}<br />
<br />
To apply any changes, signal {{ic|systemd-logind}} with {{ic|HUP}}:<br />
<br />
# systemctl kill -s HUP systemd-logind<br />
<br />
{{Note|''systemd'' cannot handle AC and battery ACPI events, so if you use [[Laptop Mode Tools]] or other similar tools, [[acpid]] is still required.}}<br />
<br />
==== Power managers ====<br />
<br />
Some [[desktop environment]]s include power managers which [https://www.freedesktop.org/wiki/Software/systemd/inhibit/ inhibit] (temporarily turn off) some or all of the ''systemd'' ACPI settings. If such a power manager is running, then the actions for ACPI events can be configured in the power manager alone. Changes to {{ic|/etc/systemd/logind.conf}} or {{ic|/etc/systemd/logind.conf.d/*.conf}} need be made only if you wish to configure behaviour for a particular event that is not inhibited by the power manager. <br />
<br />
Note that if the power manager does not inhibit ''systemd'' for the appropriate events you can end up with a situation where ''systemd'' suspends your system and then when the system is woken up the other power manager suspends it again. As of December 2016, the power managers of [[KDE]], [[GNOME]], [[Xfce]] and [[MATE]] issue the necessary ''inhibited'' commands. If the ''inhibited'' commands are not being issued, such as when using [[acpid]] or others to handle ACPI events, set the {{ic|Handle}} options to {{ic|ignore}}. See also {{man|1|systemd-inhibit}}.<br />
<br />
==== xss-lock ====<br />
<br />
{{Pkg|xss-lock}} subscribes to the systemd events {{ic|suspend}}, {{ic|hibernate}}, {{ic|lock-session}}, and {{ic|unlock-session}} with appropriate actions (running the locker and waiting for the user to unlock, or killing the locker). ''xss-lock'' also reacts to [[DPMS]] events and runs or kills the locker in response.<br />
<br />
Start xss-lock in your [[autostart]], for example<br />
<br />
xss-lock -- i3lock -n -i ''background_image.png'' &<br />
<br />
=== Suspend and hibernate ===<br />
<br />
''systemd'' provides commands to suspend to RAM or hibernate using the kernel's native suspend/resume functionality. There are also mechanisms to add hooks to customize pre- and post-suspend actions.<br />
<br />
{{ic|systemctl suspend}} should work out of the box, for {{ic|systemctl hibernate}} to work on your system you need to follow the instructions at [[Suspend and hibernate#Hibernation]].<br />
<br />
There are also two modes combining suspend and hibernate:<br />
<br />
* {{ic|systemctl hybrid-sleep}} suspends the system both to RAM and disk, so a complete power loss does not result in lost data. This mode is also called [[Power management/Suspend and hibernate|suspend to both]].<br />
* {{ic|systemctl suspend-then-hibernate}} initially suspends the system to RAM and if it is not interrupted within the delay specified by {{ic|HibernateDelaySec}} in {{man|5|systemd-sleep.conf}}, then the system will be woken using an RTC alarm and hibernated.<br />
<br />
{{Note|''systemd'' can also use other suspend backends (such as [[Uswsusp]]), in addition to the default ''kernel'' backend, in order to put the computer to sleep or hibernate. See [[Uswsusp#With systemd]] for an example.}}<br />
<br />
==== Hybrid-sleep on suspend or hibernation request ====<br />
<br />
It is possible to configure systemd to always do a ''hybrid-sleep'' even on a ''suspend'' or ''hibernation'' request.<br />
<br />
The default ''suspend'' and ''hibernation'' action can be configured in the {{ic|/etc/systemd/sleep.conf}} file. To set both actions to ''hybrid-sleep'':<br />
<br />
{{hc|/etc/systemd/sleep.conf|2=<br />
[Sleep]<br />
# suspend=hybrid-sleep<br />
SuspendMode=suspend<br />
SuspendState=disk<br />
# hibernate=hybrid-sleep<br />
HibernateMode=suspend<br />
HibernateState=disk<br />
}}<br />
<br />
See the {{man|5|sleep.conf.d}} manual page for details and the [https://www.kernel.org/doc/html/latest/admin-guide/pm/sleep-states.html#basic-sysfs-interfaces-for-system-suspend-and-hibernation linux kernel documentation on power states].<br />
<br />
=== Sleep hooks ===<br />
<br />
==== Suspend/resume service files ====<br />
<br />
Service files can be hooked into ''suspend.target'', ''hibernate.target'', ''sleep.target'', ''hybrid-sleep.target'' and ''suspend-then-hibernate.target'' to execute actions before or after suspend/hibernate. Separate files should be created for user actions and root/system actions. [[Enable]] the {{ic|suspend@''user''}} and {{ic|resume@''user''}} services to have them started at boot. Examples:<br />
<br />
{{hc|/etc/systemd/system/suspend@.service|2=<br />
[Unit]<br />
Description=User suspend actions<br />
Before=sleep.target<br />
<br />
[Service]<br />
User=%I<br />
Type=forking<br />
Environment=DISPLAY=:0<br />
ExecStartPre= -/usr/bin/pkill -u %u unison ; /usr/local/bin/music.sh stop<br />
ExecStart=/usr/bin/sflock<br />
ExecStartPost=/usr/bin/sleep 1<br />
<br />
[Install]<br />
WantedBy=sleep.target<br />
}}<br />
<br />
{{hc|/etc/systemd/system/resume@.service|2=<br />
[Unit]<br />
Description=User resume actions<br />
After=suspend.target<br />
<br />
[Service]<br />
User=%I<br />
Type=simple<br />
ExecStart=/usr/local/bin/ssh-connect.sh<br />
<br />
[Install]<br />
WantedBy=suspend.target<br />
}}<br />
<br />
{{Note|As screen lockers may return before the screen is "locked", the screen may flash on resuming from suspend. Adding a small delay via {{ic|1=ExecStartPost=/usr/bin/sleep 1}} helps prevent this.}}<br />
<br />
For root/system actions ([[enable]] the {{ic|root-resume}} and {{ic|root-suspend}} services to have them started at boot):<br />
<br />
{{hc|/etc/systemd/system/root-suspend.service|2=<br />
[Unit]<br />
Description=Local system suspend actions<br />
Before=sleep.target<br />
<br />
[Service]<br />
Type=simple<br />
ExecStart=-/usr/bin/pkill sshfs<br />
<br />
[Install]<br />
WantedBy=sleep.target<br />
}}<br />
<br />
{{hc|/etc/systemd/system/root-resume.service|2=<br />
[Unit]<br />
Description=Local system resume actions<br />
After=suspend.target<br />
<br />
[Service]<br />
Type=simple<br />
ExecStart=/usr/bin/systemctl restart mnt-media.automount<br />
<br />
[Install]<br />
WantedBy=suspend.target<br />
}}<br />
<br />
{{Tip|A couple of handy hints about these service files (more in {{man|5|systemd.service}}):<br />
<br />
* If {{ic|1=Type=oneshot}} then you can use multiple {{ic|1=ExecStart=}} lines. Otherwise only one {{ic|ExecStart}} line is allowed. You can add more commands with either {{ic|ExecStartPre}} or by separating commands with a semicolon (see the first example above; note the spaces before and after the semicolon, as they are ''required'').<br />
* A command prefixed with {{ic|-}} will cause a non-zero exit status to be ignored and treated as a successful command. <br />
* The best place to find errors when troubleshooting these service files is of course with [[journalctl]].<br />
}}<br />
<br />
==== Combined Suspend/resume service file ====<br />
<br />
With the combined suspend/resume service file, a single hook does all the work for different phases (sleep/resume) and for different targets (suspend/hibernate/hybrid-sleep).<br />
<br />
Example and explanation:<br />
<br />
{{hc|/etc/systemd/system/wicd-sleep.service|2=<br />
[Unit]<br />
Description=Wicd sleep hook<br />
Before=sleep.target<br />
StopWhenUnneeded=yes<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=-/usr/share/wicd/daemon/suspend.py<br />
ExecStop=-/usr/share/wicd/daemon/autoconnect.py<br />
<br />
[Install]<br />
WantedBy=sleep.target<br />
}}<br />
<br />
* {{ic|1=RemainAfterExit=yes}}: After started, the service is considered active until it is explicitly stopped.<br />
* {{ic|1=StopWhenUnneeded=yes}}: When active, the service will be stopped if no other active service requires it. In this specific example, it will be stopped after ''sleep.target'' is stopped.<br />
* Because ''sleep.target'' is pulled in by ''suspend.target'', ''hibernate.target'' and ''hybrid-sleep.target'' and because ''sleep.target'' itself is a ''StopWhenUnneeded'' service, the hook is guaranteed to start/stop properly for different tasks.<br />
<br />
===== Generic service template =====<br />
<br />
In this example, we create a [http://0pointer.net/blog/projects/instances.html template service] which we can then use to hook any existing systemd service to power events:[https://narkive.com/mYzxSIDN.6]<br />
<br />
{{hc|/etc/systemd/system/sleep@.service|2=<br />
[Unit]<br />
Description=%I sleep hook<br />
Before=sleep.target<br />
StopWhenUnneeded=yes<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=-/usr/bin/systemctl stop %i<br />
ExecStop=-/usr/bin/systemctl start %i<br />
<br />
[Install]<br />
WantedBy=sleep.target<br />
}}<br />
<br />
Then [[enable]] an instance of this template by specifying the basename of an existing systemd service after the {{ic|@}}, i.e., {{ic|sleep@'''''service-file-basename'''''.service}}. See {{man|5|systemd.unit|DESCRIPTION}} for more details on templates.<br />
<br />
{{Tip|Templates are not limited to systemd services and can be used with other programs. See [https://fedoramagazine.org/systemd-template-unit-files/] for some examples.}}<br />
<br />
==== Hooks in /usr/lib/systemd/system-sleep ====<br />
<br />
''systemd'' runs all executables in {{ic|/usr/lib/systemd/system-sleep/}}, passing two arguments to each of them:<br />
<br />
* Argument 1: either {{ic|pre}} or {{ic|post}}, depending on whether the machine is going to sleep or waking up<br />
* Argument 2: {{ic|suspend}}, {{ic|hibernate}} or {{ic|hybrid-sleep}}, depending on which is being invoked<br />
<br />
''systemd'' will run these scripts concurrently and not one after another.<br />
<br />
The output of any custom script will be logged by ''systemd-suspend.service'', ''systemd-hibernate.service'' or ''systemd-hybrid-sleep.service''. You can see its output in ''systemd''<nowiki>'</nowiki>s [[journalctl]]:<br />
<br />
# journalctl -b -u systemd-suspend.service<br />
<br />
{{Note|You can also use ''sleep.target'', ''suspend.target'', ''hibernate.target'' or ''hybrid-sleep.target'' to hook units into the sleep state logic instead of using custom scripts.}}<br />
<br />
An example of a custom sleep script:<br />
<br />
{{hc|/usr/lib/systemd/system-sleep/example.sh|<br />
#!/bin/sh<br />
case $1/$2 in<br />
pre/*)<br />
echo "Going to $2..."<br />
;;<br />
post/*)<br />
echo "Waking up from $2..."<br />
;;<br />
esac<br />
}}<br />
<br />
Do not forget to make your script executable:<br />
<br />
# chmod a+x /usr/lib/systemd/system-sleep/example.sh<br />
<br />
See {{man|7|systemd.special}} and {{man|8|systemd-sleep}} for more details.<br />
<br />
=== Troubleshooting ===<br />
<br />
==== Delayed lid switch action ====<br />
<br />
When performing lid switches in short succession, ''logind'' will delay the suspend action for up to 90s to detect possible docks. [https://lists.freedesktop.org/archives/systemd-devel/2015-January/027131.html] This delay was made configurable with systemd v220:[https://github.com/systemd/systemd/commit/9d10cbee89ca7f82d29b9cb27bef11e23e3803ba]<br />
<br />
{{hc|/etc/systemd/logind.conf|2=<br />
...<br />
HoldoffTimeoutSec=30s<br />
...<br />
}}<br />
<br />
==== Suspend from corresponding laptop Fn key not working ====<br />
<br />
If, regardless of the setting in logind.conf, the sleep button does not work (pressing it does not even produce a message in syslog), then logind is probably not watching the keyboard device. [https://lists.freedesktop.org/archives/systemd-devel/2015-February/028325.html] Run:<br />
<br />
# journalctl --grep="Watching system buttons"<br />
<br />
You might see something like this:<br />
<br />
May 25 21:28:19 vmarch.lan systemd-logind[210]: Watching system buttons on /dev/input/event2 (Power Button)<br />
May 25 21:28:19 vmarch.lan systemd-logind[210]: Watching system buttons on /dev/input/event3 (Sleep Button)<br />
May 25 21:28:19 vmarch.lan systemd-logind[210]: Watching system buttons on /dev/input/event4 (Video Bus)<br />
<br />
Notice that no keyboard device is listed. Now obtain ATTRS{name} for the parent keyboard device [https://systemd-devel.freedesktop.narkive.com/Rbi3rjNN/patch-1-2-logind-add-support-for-tps65217-power-button]:<br />
<br />
{{hc|# udevadm info -a /dev/input/by-path/*-kbd|2=<br />
...<br />
KERNEL=="event0"<br />
...<br />
ATTRS{name}=="AT Translated Set 2 keyboard"<br />
}}<br />
<br />
Now write a custom udev rule to add the "power-switch" tag:<br />
<br />
{{hc|/etc/udev/rules.d/70-power-switch-my.rules|2=<br />
ACTION=="remove", GOTO="power_switch_my_end"<br />
SUBSYSTEM=="input", KERNEL=="event*", ATTRS{name}=="AT Translated Set 2 keyboard", TAG+="power-switch"<br />
LABEL="power_switch_my_end"<br />
}}<br />
<br />
[[Restart]] {{ic|systemd-udevd.service}}, reload rules by running {{ic|udevadm trigger}} as root, and [[restart]] {{ic|systemd-logind.service}}.<br />
<br />
Now you should see {{ic|Watching system buttons on /dev/input/event0}} in syslog.<br />
<br />
==== PC won't wake from sleep on B550I motherboards ====<br />
<br />
On some motherboards with the B550I chipset (e.g. Gigabyte B550I AORUS PRO AX), the system does not completely enter the sleep state and cannot wake from it. Symptoms include the system entering sleep and the monitor turning off, while internal LEDs on the motherboard or the power LED stay on; the system then requires a hard power off. If you have similar issues on an AMD system, first make sure it is fully updated and check that the AMD [[microcode]] package is installed.<br />
<br />
Next, check the following:<br />
<br />
$ cat /proc/acpi/wakeup<br />
<br />
You will see something like this:<br />
<br />
Device S-state Status Sysfs node<br />
GP12 S4 *enabled pci:0000:00:07.1<br />
GP13 S4 *enabled pci:0000:00:08.1<br />
XHC0 S4 *enabled pci:0000:0b:00.3<br />
GP30 S4 *disabled<br />
GP31 S4 *disabled<br />
PS2K S3 *disabled<br />
GPP0 S4 *enabled pci:0000:00:01.1<br />
GPP8 S4 *enabled pci:0000:00:03.1<br />
PTXH S4 *enabled pci:0000:05:00.0<br />
PT20 S4 *disabled<br />
PT24 S4 *disabled<br />
PT26 S4 *disabled<br />
PT27 S4 *disabled<br />
PT28 S4 *enabled pci:0000:06:08.0<br />
PT29 S4 *enabled pci:0000:06:09.0<br />
<br />
Notice the line starting with {{ic|GPP0}}. If it is enabled, disable it by running the following command as root:<br />
<br />
 # echo GPP0 > /proc/acpi/wakeup<br />
<br />
Now test by running {{ic|systemctl suspend}} and letting the system go to sleep. Then try to wake the system after a few seconds. If it works, you can make the workaround permanent by creating a unit file:<br />
<br />
{{hc|/etc/systemd/system/toggle-gpp0-to-fix-suspend.service|2=<br />
[Unit]<br />
Description=Disable GPP0 to fix suspend issue<br />
<br />
[Service]<br />
ExecStart=/bin/sh -c "/bin/echo GPP0 > /proc/acpi/wakeup"<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
}}<br />
<br />
[[Reload]] the systemd manager configuration, then [[enable]] and [[start]] the newly created unit.<br />
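Note that writing a device name to {{ic|/proc/acpi/wakeup}} ''toggles'' its state, so running the command when GPP0 is already disabled would re-enable it. A guarded sketch makes the operation safe to repeat (the wakeup file path is a variable only so the logic can be tested against a copy of the file):<br />

```shell
#!/bin/sh
# Sketch: disable GPP0 only while it is still enabled, making the unit safe
# to run repeatedly. On a real system WAKEUP is /proc/acpi/wakeup, where the
# write toggles the state rather than replacing the file contents.
WAKEUP="${WAKEUP:-/proc/acpi/wakeup}"

disable_gpp0() {
    if grep -q '^GPP0.*enabled' "$WAKEUP"; then
        echo GPP0 > "$WAKEUP"
    fi
}
```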
<br />
== Power saving ==<br />
<br />
{{Note|See [[Laptop#Power management]] for power management specific to laptops, such as battery monitoring. See also pages specific to your CPU and GPU (e.g., [[Ryzen]], [[AMDGPU]]).}}<br />
<br />
This section is a reference for creating custom scripts and power saving settings such as by udev rules. Make sure that the settings are not managed by some [[#Userspace tools|other utility]] to avoid conflicts.<br />
<br />
Almost all of the features listed here are worth using whether the computer is on AC or battery power. Most have negligible performance impact and are just not enabled by default because of commonly broken hardware/drivers. Reducing power usage means reducing heat, which can even lead to higher performance on a modern Intel or AMD CPU, thanks to [[Wikipedia:Intel Turbo Boost|dynamic overclocking]].<br />
<br />
=== Processors with Intel HWP (Intel Hardware P-state) support ===<br />
<br />
{{Merge|CPU frequency scaling|More context in the main article.}}<br />
<br />
The available energy preferences of a HWP supported processor are {{ic|default performance balance_performance balance_power power}}.<br />
<br />
This can be validated by running<br />
<br />
$ cat /sys/devices/system/cpu/cpufreq/policy?/energy_performance_available_preferences<br />
<br />
To conserve more energy, set the preference to {{ic|balance_power}} by creating the following file:<br />
<br />
{{hc|/etc/tmpfiles.d/energy_performance_preference.conf|<br />
w /sys/devices/system/cpu/cpufreq/policy?/energy_performance_preference - - - - balance_power<br />
}}<br />
<br />
See the {{man|8|systemd-tmpfiles}} and {{man|5|tmpfiles.d}} man pages for details.<br />
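The tmpfiles.d entry above only takes effect at boot. The same value can also be written to each policy at runtime; the following sketch assumes the driver exposes the attribute under the standard cpufreq sysfs path, and the sysfs root is parameterized purely for illustration:<br />

```shell
#!/bin/sh
# Write an energy_performance_preference value into every cpufreq
# policy under the given sysfs root (default: the kernel's standard
# location). Policies whose attribute is missing or read-only are
# skipped silently.
set_epp() {
    pref=$1
    root=${2:-/sys/devices/system/cpu/cpufreq}
    for f in "$root"/policy*/energy_performance_preference; do
        [ -w "$f" ] || continue
        printf '%s' "$pref" > "$f"
    done
}

# Run as root to apply immediately, e.g.:
# set_epp balance_power
```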
<br />
=== Audio ===<br />
<br />
==== Kernel ====<br />
<br />
By default, audio power saving is turned off by most drivers. It can be enabled by setting the {{ic|power_save}} parameter to the time (in seconds) after which the device enters idle mode. To idle the audio card after one second, create the following file for Intel soundcards.<br />
<br />
{{hc|/etc/modprobe.d/audio_powersave.conf|2=<br />
options snd_hda_intel power_save=1<br />
}}<br />
<br />
Alternatively, use the following for ac97:<br />
<br />
options snd_ac97_codec power_save=1<br />
<br />
{{Note|<br />
* To retrieve the manufacturer and the corresponding kernel driver which is used for your sound card, run {{ic|lspci -k}}.<br />
* Toggling the audio card's power state can cause a popping sound or noticeable latency on some broken hardware.<br />
}}<br />
<br />
It is also possible to further reduce the audio power requirements by disabling the HDMI audio output, which can be done by [[blacklisting]] the appropriate kernel modules (e.g. {{ic|snd_hda_codec_hdmi}} in case of Intel hardware).<br />
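To confirm the value took effect after a reboot or module reload, the active setting can be read back from sysfs. A sketch, assuming the {{ic|snd_hda_intel}} driver; the path is parameterized only so the helper can be exercised without the hardware:<br />

```shell
#!/bin/sh
# Print the current power_save timeout of the HDA Intel driver.
# Prints "unavailable" when the module parameter is not present
# (e.g. the module is not loaded).
audio_power_save() {
    f=${1:-/sys/module/snd_hda_intel/parameters/power_save}
    if [ -r "$f" ]; then
        cat "$f"
    else
        echo unavailable
    fi
}

audio_power_save
```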
<br />
==== PulseAudio ====<br />
<br />
By default, PulseAudio suspends any audio sources that have become idle for too long. When using an external USB microphone, recordings may start with a pop sound. As a workaround, comment out the following line in {{ic|/etc/pulse/default.pa}}:<br />
<br />
load-module module-suspend-on-idle<br />
<br />
Afterwards, restart PulseAudio with {{ic|systemctl restart --user pulseaudio}}.<br />
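Editing the file by hand works fine; for scripted setups, the line can also be commented out with ''sed''. A sketch that rewrites the file in place, so keep a backup of {{ic|/etc/pulse/default.pa}} first:<br />

```shell
#!/bin/sh
# Prefix the suspend-on-idle line with '#' so PulseAudio no longer
# loads the module. "$1" is the configuration file to edit in place.
# Lines already commented are left untouched, so the edit is safe to
# repeat.
disable_suspend_on_idle() {
    sed -i 's/^load-module module-suspend-on-idle/#&/' "$1"
}

# Run as root, e.g.:
# disable_suspend_on_idle /etc/pulse/default.pa
```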
<br />
=== Backlight ===<br />
<br />
See [[Backlight]].<br />
<br />
=== Bluetooth ===<br />
<br />
{{expansion|reason=The device should likely be disabled with hciconfig first.}}<br />
<br />
To disable bluetooth completely, [[blacklist]] the {{ic|btusb}} and {{ic|bluetooth}} modules.<br />
<br />
To turn off bluetooth only temporarily, use ''rfkill'':<br />
<br />
# rfkill block bluetooth<br />
<br />
Or with udev rule:<br />
<br />
{{hc|/etc/udev/rules.d/50-bluetooth.rules|2=<br />
# disable bluetooth<br />
SUBSYSTEM=="rfkill", ATTR{type}=="bluetooth", ATTR{state}="0"<br />
}}<br />
<br />
=== Web camera ===<br />
<br />
If you do not use the integrated web camera, [[blacklist]] the {{ic|uvcvideo}} module.<br />
<br />
=== Kernel parameters ===<br />
<br />
This section uses configurations in {{ic|/etc/sysctl.d/}}, which is ''"a drop-in directory for kernel sysctl parameters."'' See [http://0pointer.de/blog/projects/the-new-configuration-files The New Configuration Files] and more specifically {{man|5|sysctl.d}} for more information.<br />
<br />
==== Disabling NMI watchdog ====<br />
<br />
{{Expansion|This or {{ic|nowatchdog}} as can be seen in [[Improving performance#Watchdogs]]}}<br />
<br />
The [[Wikipedia:Non-maskable interrupt|NMI]] watchdog is a debugging feature to catch hardware hangs that cause a kernel panic. On some systems it can generate a lot of interrupts, causing a noticeable increase in power usage:<br />
<br />
{{hc|/etc/sysctl.d/disable_watchdog.conf|2=<br />
kernel.nmi_watchdog = 0<br />
}}<br />
<br />
or add {{ic|1=nmi_watchdog=0}} to the [[kernel line]] to disable it completely from early boot.<br />
<br />
==== Writeback Time ====<br />
<br />
Increasing the virtual memory dirty writeback time helps to aggregate disk I/O together, thus reducing spanned disk writes, and increasing power saving. To set the value to 60 seconds (default is 5 seconds):<br />
<br />
{{hc|/etc/sysctl.d/dirty.conf|2=<br />
vm.dirty_writeback_centisecs = 6000<br />
}}<br />
<br />
To do the same for journal commits on supported filesystems (e.g. ext4, btrfs...), use {{ic|1=commit=60}} as a mount option in [[fstab]].<br />
<br />
Note that this value is modified as a side effect of the Laptop Mode setting below. See also [[sysctl#Virtual memory]] for other parameters affecting I/O performance and power saving.<br />
<br />
==== Laptop Mode ====<br />
<br />
See the [https://www.kernel.org/doc/html/latest/admin-guide/laptops/laptop-mode.html kernel documentation] on the laptop mode "knob". "A sensible value for the knob is 5 seconds."<br />
<br />
{{hc|/etc/sysctl.d/laptop.conf|2=<br />
vm.laptop_mode = 5<br />
}}<br />
<br />
{{Note|This setting is mainly relevant to spinning-disk drives.}}<br />
<br />
=== Network interfaces ===<br />
<br />
[[Wake-on-LAN]] can be a useful feature, but if you are not making use of it then it is simply draining extra power waiting for a magic packet while in suspend. You can adapt the [[Wake-on-LAN#udev]] rule to disable the feature for all ethernet interfaces. To enable powersaving with {{Pkg|iw}} on all wireless interfaces:<br />
<br />
{{hc|/etc/udev/rules.d/'''81'''-wifi-powersave.rules|2=<br />
ACTION=="add", SUBSYSTEM=="net", KERNEL=="wl*", RUN+="/usr/bin/iw dev $name set power_save on"<br />
}}<br />
<br />
The name of the configuration file is important. With the use of [[Network configuration#Change interface name|persistent device names]] in systemd, the above network rule, named lexicographically '''after''' {{ic|80-net-setup-link.rules}}, is applied after the device is renamed with a persistent name, e.g. {{ic|wlan0}} renamed to {{ic|wlp3s0}}. Be aware that the {{ic|RUN}} command is executed after all rules have been processed and must therefore use the persistent name, available in {{ic|$name}} for the matched device.<br />
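To check whether the rule took effect for a given interface, query the driver with {{ic|iw dev <name> get power_save}}. The wireless interface names can be enumerated from sysfs; a sketch (the sysfs root is parameterized only for illustration):<br />

```shell
#!/bin/sh
# List wireless-looking ("wl*") interfaces under the given sysfs
# net directory (default: the kernel's standard location).
list_wl_interfaces() {
    root=${1:-/sys/class/net}
    for dev in "$root"/wl*; do
        [ -e "$dev" ] || continue   # glob matched nothing
        printf '%s\n' "${dev##*/}"
    done
}

list_wl_interfaces
```

Each printed name can then be passed to {{ic|iw dev <name> get power_save}}.<br />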
<br />
==== Intel wireless cards (iwlwifi) ====<br />
<br />
Additional power saving functions of Intel wireless cards with {{ic|iwlwifi}} driver can be enabled by passing the correct parameters to the kernel module. Making them persistent can be achieved by adding the lines below to the {{ic|/etc/modprobe.d/iwlwifi.conf}} file:<br />
<br />
options iwlwifi power_save=1<br />
<br />
This option will probably increase your median latency:<br />
<br />
options iwlwifi uapsd_disable=0<br />
<br />
On kernels < 5.4 you can use this option, but it will probably decrease your maximum throughput:<br />
<br />
options iwlwifi d0i3_disable=0<br />
<br />
Depending on your wireless card, one of the following two options will apply.<br />
<br />
options iwlmvm power_scheme=3<br />
<br />
options iwldvm force_cam=0<br />
<br />
You can check which one is relevant by seeing which of these modules is loaded:<br />
<br />
# lsmod | grep '^iwl.vm'<br />
<br />
Keep in mind that these power saving options are experimental and can cause an unstable system.<br />
<br />
=== Bus power management ===<br />
<br />
==== Active State Power Management ====<br />
<br />
If the kernel believes the computer does not support [[Wikipedia:Active State Power Management|ASPM]], it will be disabled on boot. To check whether ASPM is currently enabled:<br />
<br />
# lspci -vv | grep 'ASPM.*abled;'<br />
<br />
ASPM is handled by the BIOS. If ASPM is disabled, it may be for one of the following reasons [https://wireless.wiki.kernel.org/en/users/documentation/ASPM]:<br />
<br />
# The BIOS disabled it for some reason (for conflicts?).<br />
# PCIE requires ASPM but L0s are optional (so L0s might be disabled and only L1 enabled).<br />
# The BIOS might not have been programmed for it.<br />
# The BIOS is buggy.<br />
<br />
If the computer does support ASPM, it can be forced on for the kernel to handle with the {{ic|1=pcie_aspm=force}} [[kernel parameter]].<br />
<br />
{{Warning|<br />
* Forcing on ASPM can cause a freeze/panic, so make sure you have a way to undo the option if it does not work.<br />
* On systems that do not support it forcing on ASPM can even increase power consumption.<br />
* This forces ASPM in kernel while it can still remain disabled in hardware and not work. To check whether this is the case, run {{ic|dmesg {{!}} grep ASPM}} as root. If so, consult the Wiki article specific to your hardware.<br />
}}<br />
<br />
To set the policy to {{ic|powersave}} (the following command will not work unless ASPM is enabled):<br />
<br />
# echo powersave > /sys/module/pcie_aspm/parameters/policy<br />
<br />
By default it looks like this:<br />
<br />
{{hc|$ cat /sys/module/pcie_aspm/parameters/policy|<br />
[default] performance powersave powersupersave<br />
}}<br />
<br />
==== PCI Runtime Power Management ====<br />
<br />
{{hc|/etc/udev/rules.d/pci_pm.rules|2=<br />
SUBSYSTEM=="pci", ATTR{power/control}="auto"<br />
}}<br />
<br />
The rule above powers all unused devices down, but some devices will not wake up again. To allow runtime power management only for devices that are known to work, use simple matching against vendor and device IDs (use {{ic|lspci -nn}} to get these values):<br />
<br />
{{hc|/etc/udev/rules.d/pci_pm.rules|2=<br />
# whitelist for pci autosuspend<br />
SUBSYSTEM=="pci", ATTR{vendor}=="0x1234", ATTR{device}=="0x1234", ATTR{power/control}="auto"<br />
}}<br />
<br />
Alternatively, to blacklist devices that are not working with PCI runtime power management and enable it for all other devices:<br />
<br />
{{hc|/etc/udev/rules.d/pci_pm.rules|2=<br />
# blacklist for pci runtime power management<br />
SUBSYSTEM=="pci", ATTR{vendor}=="0x1234", ATTR{device}=="0x1234", ATTR{power/control}="on", GOTO="pci_pm_end"<br />
<br />
SUBSYSTEM=="pci", ATTR{power/control}="auto"<br />
LABEL="pci_pm_end"<br />
}}<br />
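Before whitelisting or blacklisting, it helps to see which state each device is currently in. The following sketch prints the {{ic|power/control}} attribute for every PCI device; the sysfs root is parameterized only so the helper can be exercised without real hardware:<br />

```shell
#!/bin/sh
# Print "<address>: <control>" for every PCI device that exposes a
# power/control attribute ("auto" = runtime PM allowed, "on" = the
# device is kept fully powered).
pci_pm_status() {
    root=${1:-/sys/bus/pci/devices}
    for dev in "$root"/*; do
        [ -f "$dev/power/control" ] || continue
        printf '%s: %s\n' "${dev##*/}" "$(cat "$dev/power/control")"
    done
}

pci_pm_status
```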
<br />
==== USB autosuspend ====<br />
<br />
The Linux kernel can automatically suspend USB devices when they are not in use. This can sometimes save quite a bit of power, however some USB devices are not compatible with USB power saving and start to misbehave (common for USB mice/keyboards). [[udev]] rules based on whitelist or blacklist filtering can help to mitigate the problem.<br />
<br />
The simplest, though likely impractical, example is enabling autosuspend for all USB devices:<br />
<br />
{{hc|/etc/udev/rules.d/50-usb_power_save.rules|2=<br />
ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{power/control}="auto"<br />
}}<br />
<br />
To allow autosuspend only for devices that are known to work, use simple matching against vendor and product IDs (use ''lsusb'' to get these values):<br />
<br />
{{hc|/etc/udev/rules.d/50-usb_power_save.rules|2=<br />
# whitelist for usb autosuspend<br />
ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{idVendor}=="05c6", ATTR{idProduct}=="9205", ATTR{power/control}="auto"<br />
}}<br />
<br />
Alternatively, to blacklist devices that are not working with USB autosuspend and enable it for all other devices:<br />
<br />
{{hc|/etc/udev/rules.d/50-usb_power_save.rules|2=<br />
# blacklist for usb autosuspend<br />
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="05c6", ATTR{idProduct}=="9205", GOTO="power_usb_rules_end"<br />
<br />
ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{power/control}="auto"<br />
LABEL="power_usb_rules_end"<br />
}}<br />
<br />
The default autosuspend idle delay time is controlled by the {{ic|autosuspend}} parameter of the {{ic|usbcore}} built-in [[kernel module]]. To set the delay to 5 seconds instead of the default 2 seconds, add the following [[kernel parameter]] for your bootloader.<br />
<br />
{{bc|1=usbcore.autosuspend=5}}<br />
<br />
Similarly to {{ic|power/control}}, the delay time can be fine-tuned per device by setting the {{ic|power/autosuspend}} attribute. This means, alternatively, autosuspend can be disabled by setting {{ic|power/autosuspend}} to -1 (i.e., never autosuspend):<br />
<br />
{{hc|/etc/udev/rules.d/50-usb_power_save.rules|2=<br />
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="05c6", ATTR{idProduct}=="9205", ATTR{power/autosuspend}="-1"<br />
}}<br />
<br />
See the [https://www.kernel.org/doc/html/latest/driver-api/usb/power-management.html Linux kernel documentation] for more information on USB power management.<br />
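A quick way to audit which USB devices currently allow autosuspend, and with what delay, is to read back the same attributes the rules above set. A sketch (the sysfs root is parameterized only for illustration):<br />

```shell
#!/bin/sh
# For each USB device that exposes power/control, print its control
# state ("auto" or "on") and its autosuspend delay in seconds, or
# "?" when the device has no delay attribute.
usb_pm_status() {
    root=${1:-/sys/bus/usb/devices}
    for dev in "$root"/*; do
        [ -f "$dev/power/control" ] || continue
        printf '%s: control=%s delay=%s\n' "${dev##*/}" \
            "$(cat "$dev/power/control")" \
            "$(cat "$dev/power/autosuspend" 2>/dev/null || echo '?')"
    done
}

usb_pm_status
```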
<br />
==== SATA Active Link Power Management ====<br />
<br />
{{Warning|SATA Active Link Power Management can lead to data loss on some devices. Do not enable this setting unless you have frequent backups.}}<br />
<br />
Linux 4.15 added a [https://hansdegoede.livejournal.com/18412.html setting] called {{ic|med_power_with_dipm}} that matches the behaviour of the Windows IRST driver settings and should not cause data loss with recent SSD/HDD drives. The power saving can be significant, ranging [https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ebb82e3c79d2a956366d0848304a53648bd6350b from 1.0 to 1.5 Watts (when idle)]. Since Linux 4.16, it is the default setting for Intel-based laptops [https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ebb82e3c79d2a956366d0848304a53648bd6350b].<br />
<br />
The current setting can be read from {{ic|/sys/class/scsi_host/host*/link_power_management_policy}} as follows:<br />
<br />
$ cat /sys/class/scsi_host/host*/link_power_management_policy<br />
<br />
{| class="wikitable"<br />
|+ Available ALPM settings<br />
! Setting<br />
! Description<br />
! Power saving<br />
|-<br />
| max_performance<br />
| current default<br />
| None<br />
|-<br />
| medium_power<br />
| -<br />
| ~1.0 Watts<br />
|-<br />
| med_power_with_dipm<br />
| recommended setting<br />
| ~1.5 Watts<br />
|-<br />
| min_power<br />
| '''WARNING: possible data loss'''<br />
| ~1.5 Watts<br />
|}<br />
<br />
{{hc|/etc/udev/rules.d/hd_power_save.rules|2=<br />
ACTION=="add", SUBSYSTEM=="scsi_host", KERNEL=="host*", ATTR{link_power_management_policy}="med_power_with_dipm"<br />
}}<br />
<br />
{{Note|This adds latency when accessing a drive that has been idle, so it is one of the few settings that may be worth toggling based on whether you are on AC power.}}<br />
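The udev rule applies when a device is added. To switch the policy at runtime without rebooting, the value can also be written to each SCSI host directly (root required). A sketch, guarded so hosts without the attribute are skipped:<br />

```shell
#!/bin/sh
# Write an ALPM policy into every SCSI host under the given sysfs
# root (default: the kernel's standard location) that exposes a
# writable link_power_management_policy attribute.
set_alpm() {
    policy=$1
    root=${2:-/sys/class/scsi_host}
    for f in "$root"/host*/link_power_management_policy; do
        [ -w "$f" ] || continue
        printf '%s' "$policy" > "$f"
    done
}

# Run as root to apply immediately, e.g.:
# set_alpm med_power_with_dipm
```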
<br />
=== Hard disk drive ===<br />
<br />
See [[hdparm#Power management configuration]] for drive parameters that can be set.<br />
<br />
Power saving is not effective when too many programs are frequently writing to the disk. The way to limit disk usage is to track which programs write to the disk, and how and when they do so. Use {{Pkg|iotop}} to see which programs use the disk frequently. See [[Improving performance#Storage devices]] for other tips.<br />
<br />
Small changes like setting the [[Fstab#atime options|noatime]] option can also help. If enough RAM is available, consider disabling or limiting [[swappiness]], as this can avoid a good number of disk writes.<br />
<br />
=== CD-ROM or DVD drive ===<br />
<br />
See [[Udisks#Devices do not remain unmounted (udisks)]].<br />
<br />
== Tools and scripts ==<br />
<br />
{{Style|Merged from [[Power saving]], needs reorganization to fit into this page.}}<br />
<br />
=== Using a script and an udev rule ===<br />
<br />
Since systemd users can suspend and hibernate through {{ic|systemctl suspend}} or {{ic|systemctl hibernate}} and handle acpi events with {{ic|/etc/systemd/logind.conf}}, it might be interesting to remove ''pm-utils'' and [[acpid]]. There is just one thing systemd cannot do (as of systemd-204): power management depending on whether the system is running on AC or battery. To fill this gap, you can create a single [[udev]] rule that runs a script when the AC adapter is plugged and unplugged:<br />
<br />
{{hc|/etc/udev/rules.d/powersave.rules|2=<br />
SUBSYSTEM=="power_supply", ATTR{online}=="0", RUN+="/path/to/your/script true"<br />
SUBSYSTEM=="power_supply", ATTR{online}=="1", RUN+="/path/to/your/script false"<br />
}}<br />
<br />
{{Note|You can use the same script that ''pm-powersave'' uses. You just have to make it executable and place it somewhere else (for example {{ic|/usr/local/bin/}}).}}<br />
<br />
Examples of powersave scripts:<br />
<br />
* [https://github.com/supplantr/ftw ftw], package: {{AUR|ftw-git}}<br />
* [https://github.com/Unia/powersave powersave]<br />
* [https://github.com/quequotion/pantheon-bzr-qq/blob/master/EXTRAS/indicator-powersave/throttle throttle], from {{AUR|indicator-powersave}}<br />
<br />
The above udev rule should work as expected, but if your power settings are not updated after a suspend or hibernate cycle, you should add a script in {{ic|/usr/lib/systemd/system-sleep/}} with the following contents:<br />
<br />
{{hc|/usr/lib/systemd/system-sleep/00powersave|<br />
#!/bin/sh<br />
<br />
case $1 in<br />
pre) /path/to/your/script false ;;<br />
post) <br />
if grep -q 0 /sys/class/power_supply/AC0/online 2> /dev/null<br />
then<br />
/path/to/your/script true <br />
else<br />
/path/to/your/script false<br />
fi<br />
;;<br />
esac<br />
exit 0<br />
}}<br />
<br />
Do not forget to make it executable!<br />
<br />
{{Note|Be aware that AC0 may be different for your laptop, change it if that is the case.}}<br />
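The scripts linked above are full-featured; the following is only a minimal sketch of the shape such a script takes. The specific tunables and paths are illustrative, not a recommendation:<br />

```shell
#!/bin/sh
# Hypothetical powersave script: the udev rule above calls it with
# "true" (on battery) or "false" (on AC) as its only argument.
on_battery=$1

set_if_writable() {
    # Write $2 into $1 only when the attribute exists and is writable,
    # so the script degrades gracefully on hardware without that knob.
    [ -w "$1" ] && printf '%s' "$2" > "$1"
    return 0
}

case $on_battery in
    true)
        set_if_writable /proc/sys/vm/laptop_mode 5
        set_if_writable /sys/module/snd_hda_intel/parameters/power_save 1
        ;;
    false)
        set_if_writable /proc/sys/vm/laptop_mode 0
        set_if_writable /sys/module/snd_hda_intel/parameters/power_save 0
        ;;
esac
```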
<br />
=== Print power settings ===<br />
<br />
This script prints power settings and a variety of other properties for USB and PCI devices. Note that root permissions are needed to see all settings.<br />
<br />
{{bc|1=<br />
#!/bin/bash<br />
<br />
for i in $(find /sys/devices -name "bMaxPower")<br />
do<br />
busdir=${i%/*}<br />
busnum=$(<$busdir/busnum)<br />
devnum=$(<$busdir/devnum)<br />
title=$(lsusb -s $busnum:$devnum)<br />
<br />
printf "\n\n+++ %s\n -%s\n" "$title" "$busdir"<br />
<br />
for ff in $(find $busdir/power -type f ! -empty 2>/dev/null)<br />
do<br />
v=$(cat $ff 2>/dev/null{{!}}tr -d "\n")<br />
[[ ${#v} -gt 0 ]] && echo -e " ${ff##*/}=$v";<br />
v=;<br />
done {{!}} sort -g;<br />
done;<br />
<br />
printf "\n\n\n+++ %s\n" "Kernel Modules"<br />
for mod in $(lspci -k {{!}} sed -n '/in use:/s,^.*: ,,p' {{!}} sort -u)<br />
do<br />
echo "+ $mod";<br />
systool -v -m $mod 2> /dev/null {{!}} sed -n "/Parameters:/,/^$/p";<br />
done<br />
}}<br />
<br />
== See also ==<br />
<br />
* [https://www.thinkwiki.org/wiki/How_to_reduce_power_consumption ThinkWiki:How to reduce power consumption]<br />
* [https://ivanvojtko.blogspot.sk/2016/04/how-to-get-longer-battery-life-on-linux.html How to get longer battery life on Linux]</div>Mousemanhttps://wiki.archlinux.org/index.php?title=User:Mouseman&diff=574487User:Mouseman2019-06-03T12:46:24Z<p>Mouseman: </p>
<hr />
<div>The 'Talk/Discussion' page is not meant to personally ask me questions. Please use the Arch Linux BBS / Forums for that. Thank you.</div>Mousemanhttps://wiki.archlinux.org/index.php?title=User_talk:Mouseman&diff=574486User talk:Mouseman2019-06-03T12:46:13Z<p>Mouseman: </p>
<hr />
<div>[[User:Mouseman|Mouseman]] ([[User talk:Mouseman|talk]]) 13:18, 1 July 2017 (UTC)<br />
The 'Talk/Discussion' page is not meant to personally ask me questions. Please use the Arch Linux BBS / Forums for that. Thank you.<br />
[[User:Mouseman|Mouseman]] ([[User talk:Mouseman|talk]]) 12:44, 3 June 2019 (UTC)</div>Mousemanhttps://wiki.archlinux.org/index.php?title=User:Mouseman&diff=574485User:Mouseman2019-06-03T12:45:45Z<p>Mouseman: </p>
<hr />
<div>The 'Talk' page is not meant to personally ask me questions. Please use the Arch Linux BBS / Forums for that. Thank you.</div>Mousemanhttps://wiki.archlinux.org/index.php?title=User_talk:Mouseman&diff=574484User talk:Mouseman2019-06-03T12:44:20Z<p>Mouseman: </p>
<hr />
<div>[[User:Mouseman|Mouseman]] ([[User talk:Mouseman|talk]]) 13:18, 1 July 2017 (UTC)<br />
The 'Talk' page is not meant to personally ask me questions. Please use the Arch Linux BBS / Forums for that. Thank you.<br />
[[User:Mouseman|Mouseman]] ([[User talk:Mouseman|talk]]) 12:44, 3 June 2019 (UTC)</div>Mousemanhttps://wiki.archlinux.org/index.php?title=User_talk:Mouseman&diff=574483User talk:Mouseman2019-06-03T12:40:29Z<p>Mouseman: Please use the forums for personal questions or when you seek advice about certain topics such as whether to use ZFS or not.</p>
<hr />
<div>[[User:Mouseman|Mouseman]] ([[User talk:Mouseman|talk]]) 13:18, 1 July 2017 (UTC)</div>Mousemanhttps://wiki.archlinux.org/index.php?title=ZFS&diff=557066ZFS2018-11-24T18:11:54Z<p>Mouseman: /* Scrub */ some explanation of what scrub is</p>
<hr />
<div>[[Category:File systems]]<br />
[[Category:Oracle]]<br />
[[ja:ZFS]]<br />
[[ru:ZFS]]<br />
[[zh-hans:ZFS]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|ZFS/Virtual disks}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. <br />
<br />
Features of ZFS include: pooled storage (integrated volume management – zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum 256 Quadrillion [[Wikipedia:Zettabyte|Zettabytes]] storage with no limit on number of filesystems (datasets) or files[http://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/index.html]. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the CDDL, and thus incompatible with GPL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
{{Note|Due to potential legal incompatibilities between CDDL license of ZFS code and GPL of the Linux kernel ([https://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/ ],[[wikipedia:Common_Development_and_Distribution_License#GPL_compatibility|CDDL-GPL]],[[wikipedia:ZFS#Linux|ZFS in Linux]]) - ZFS development is not supported by the kernel.<br />
<br />
As a result:<br />
* ZFSonLinux project must keep up with Linux kernel versions. After making stable ZFSonLinux release - Arch ZFS maintainers release them.<br />
* This situation sometimes locks down the normal rolling update process by unsatisfied dependencies because the new kernel version, proposed by update, is unsupported by ZFSonLinux.}}<br />
<br />
== Installation ==<br />
=== General ===<br />
<br />
{{warning|Unless you use the [[dkms]] versions of these packages, the ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#archzfs|archzfs]] repository.}}<br />
<br />
{{Tip| You can [[downgrade]] your linux version to the one from [[Unofficial user repositories#archzfs|archzfs]] repo if your current kernel is newer.}}<br />
<br />
Install from the [[Arch User Repository]] or the [[Unofficial user repositories#archzfs|archzfs]] repository:<br />
* {{AUR|zfs-linux}} for [http://zfsonlinux.org/ stable] releases.<br />
* {{AUR|zfs-linux-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases (with support of newer kernel versions).<br />
* {{AUR|zfs-linux-lts}} for stable releases for LTS kernels.<br />
* {{AUR|zfs-linux-lts-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for LTS kernels.<br />
* {{AUR|zfs-linux-hardened}} for stable releases for hardened kernels.<br />
* {{AUR|zfs-linux-hardened-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for hardened kernels.<br />
* {{AUR|zfs-linux-zen}} for stable releases for zen kernels.<br />
* {{AUR|zfs-linux-zen-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for zen kernels.<br />
* {{AUR|zfs-dkms}} for versions with dynamic kernel module support.<br />
* {{AUR|zfs-dkms-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for versions with dynamic kernel module support.<br />
<br />
These packages depend on {{ic|zfs-utils}}, {{ic|spl}} and {{ic|spl-utils}}. SPL (Solaris Porting Layer) is a Linux kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Root on ZFS ===<br />
<br />
When performing an Arch install on ZFS, {{AUR|zfs-linux}} or {{AUR|zfs-dkms}} and its dependencies can be installed in the archiso environment as outlined in the previous section.<br />
<br />
It may be useful to prepare a [[#Embed the archzfs packages into an archiso|customized archiso]] with ZFS support builtin. For a much more detailed guide on installing Arch with ZFS as its root file system, see [[Installing Arch Linux on ZFS]].<br />
<br />
=== DKMS ===<br />
<br />
Users can make use of DKMS [[Dynamic Kernel Module Support]] to rebuild the ZFS modules automatically with every kernel upgrade. <br />
<br />
Install {{AUR|zfs-dkms}} or {{AUR|zfs-dkms-git}} and apply the post-install instructions given by these packages.<br />
<br />
{{Tip|Add an {{ic|IgnorePkg}} entry to [[pacman.conf]] to prevent these packages from upgrading when doing a regular update.}}<br />
<br />
== Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
== Configuration ==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
=== Automatic Start ===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
{{Note|Beginning with ZOL version 0.6.5.8 the ZFS service unit files have been changed so that you need to explicitly enable any ZFS services you want to run.<br />
<br />
See [https://github.com/archzfs/archzfs/issues/72 https://github.com/archzfs/archzfs/issues/72] for more information.<br />
<br />
}}<br />
<br />
In order to mount zfs pools automatically on boot you need to enable the following services and targets:<br />
<br />
# systemctl enable zfs-import-cache<br />
# systemctl enable zfs-mount<br />
# systemctl enable zfs-import.target<br />
<br />
or, as explained on [https://github.com/archzfs/archzfs/issues/72 the GitHub issue], use the [https://www.freedesktop.org/software/systemd/man/systemd.preset.html systemd preset file]:<br />
<br />
# systemctl preset $(tail -n +2 /usr/lib/systemd/system-preset/50-zfs.preset | cut -d ' ' -f 2)<br />
<br />
== Creating a storage pool ==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all device have been used in a software RAID set it is paramount to erase any old RAID configuration information. ([[Mdadm#Prepare the devices]]) }}<br />
<br />
{{Warning|For Advanced Format Disks with 4KB sector size, an ashift of 12 is recommended for best performance. Advanced Format disks emulate a sector size of 512 bytes for compatibility with legacy systems, this causes ZFS to sometimes use an ashift option number that is not ideal. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [https://github.com/zfsonlinux/zfs/wiki/faq#performance-considerations 1.10 What’s going on with performance?], [https://github.com/zfsonlinux/zfs/wiki/faq#advanced-format-disks 1.15 How does ZFS on Linux handle Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the IDs of the drives to add to the zpool. The [https://github.com/zfsonlinux/zfs/wiki/faq#selecting-dev-names-when-creating-a-pool zfs on Linux developers recommend] using device IDs when creating ZFS storage pools of less than 10 devices. To find the IDs, simply:<br />
<br />
# ls -lh /dev/disk/by-id/<br />
<br />
The IDs should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
{{Warning|If you create zpools using device names (e.g. /dev/sda,/dev/sdb,...) ZFS might not be able to detect zpools intermittently on boot.}}<br />
<br />
{{Style|Missing references to [[Persistent block device naming]], it is useless to explain the differences (or even what they are) here.}}<br />
<br />
Disk labels and UUID can also be used for ZFS mounts by using [[GUID Partition Table|GPT]] partitions. ZFS drives have labels but Linux is unable to read them at boot. Unlike [[Master Boot Record|MBR]] partitions, GPT partitions directly support both UUID and labels independent of the format inside the partition. Partitioning rather than using the whole disk for ZFS offers two additional advantages. The OS does not generate bogus partition numbers from whatever unpredictable data ZFS has written to the partition sector, and if desired, you can easily over provision SSD drives, and slightly over provision spindle drives to ensure that different models with slightly different sector counts can zpool replace into your mirrors. This is a lot of organization and control over ZFS using readily available tools and techniques at almost zero cost.<br />
<br />
Use [[GUID Partition Table|gdisk]] to partition all or part of the drive as a single partition. gdisk does not automatically name partitions, so if partition labels are desired, use the gdisk command "c" to label the partitions. Some reasons you might prefer labels over UUIDs are: labels are easy to control, labels can be titled to make the purpose of each disk in your arrangement readily apparent, and labels are shorter and easier to type. These are all advantages when the server is down and the heat is on. GPT partition labels have plenty of space and can store most international characters [[wikipedia:GUID_Partition_Table#Partition_entries]], allowing large data pools to be labeled in an organized fashion.<br />
<br />
Drives partitioned with GPT have labels and UUIDs that look like this:<br />
<br />
{{hc|# ls -l /dev/disk/by-partlabel|<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata1 -> ../../sdd1<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata2 -> ../../sdc1<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:59 zfsl2arc -> ../../sda1<br />
}}<br />
<br />
{{hc|# ls -l /dev/disk/by-partuuid|<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:44 148c462c-7819-431a-9aba-5bf42bb5a34e -> ../../sdd1<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:59 4f95da30-b2fb-412b-9090-fc349993df56 -> ../../sda1<br />
lrwxrwxrwx 1 root root 10 Apr 30 01:44 e5ccef58-5adf-4094-81a7-3bac846a885f -> ../../sdc1<br />
}}<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> [raidz(2|3)|mirror] <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz(2|3)|mirror''': This is the type of virtual device (vdev) that will be created from the pool of devices. '''raidz''' uses a single disk of parity, '''raidz2''' two disks of parity and '''raidz3''' three disks of parity, similar to RAID 5 and RAID 6. Also available is '''mirror''', which is similar to RAID 1 or RAID 10, but is not constrained to just two devices. If not specified, each device will be added as a separate vdev, which is similar to RAID 0. After creation, a device can be added to each single-drive vdev to turn it into a mirror, which can be useful for migrating data.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool. Get them from {{ic|/dev/disk/by-id}}.<br />
<br />
Create pool with single raidz vdev:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
Create pool with two mirror vdevs:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
mirror \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
mirror \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Advanced Format disks ===<br />
<br />
At pool creation, '''ashift=12''' should always be used, except with SSDs that have 8 KiB sectors, where '''ashift=13''' is correct. A vdev of 512-byte disks using 4 KiB sectors will not experience performance issues, but a 4 KiB disk using 512-byte sectors will. Since '''ashift''' cannot be changed after pool creation, even a pool with only 512-byte disks should use 4 KiB sectors, because those disks may later need to be replaced with 4 KiB disks, or the pool may be expanded by adding a vdev composed of 4 KiB disks. Because correct detection of 4 KiB disks is not reliable, {{ic|<nowiki>-o ashift=12</nowiki>}} should always be specified during pool creation. See the [https://github.com/zfsonlinux/zfs/wiki/faq#advanced-format-disks ZFS on Linux FAQ] for more details.<br />
<br />
Create pool with ashift=12 and single raidz vdev:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
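If unsure what a drive reports, the appropriate ashift is simply log2 of the physical sector size. A minimal sketch that derives it; the sysfs path and the device name {{ic|sda}} are assumptions, and the sector size is hard-coded here for illustration:<br />

```shell
# On a live system, read the physical sector size with:
#   sector=$(cat /sys/class/block/sda/queue/physical_block_size)
sector=4096   # example value reported by an Advanced Format disk

# ashift is log2 of the sector size: 512 -> 9, 4096 -> 12, 8192 -> 13
ashift=0
s=$sector
while [ "$s" -gt 1 ]; do
    s=$((s / 2))
    ashift=$((ashift + 1))
done
echo "use -o ashift=$ashift"
```

512-byte disks yield ashift=9, 4 KiB disks ashift=12, and 8 KiB SSDs ashift=13, matching the guidance above.<br />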
<br />
=== Verifying pool creation ===<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
=== GRUB-compatible pool creation ===<br />
<br />
{{note|This section frequently goes out of date with updates to GRUB and ZFS. Consult the manual pages for the most up-to-date information.}}<br />
<br />
By default, ''zpool create'' enables all features on a pool. If {{ic|/boot}} resides on ZFS and you are using [[GRUB]], you must enable only the features supported by GRUB, otherwise GRUB will not be able to read the pool. GRUB 2.02 supports the read-write features {{ic|lz4_compress}}, {{ic|hole_birth}}, {{ic|embedded_data}}, {{ic|extensible_dataset}}, and {{ic|large_blocks}}; it does not support all the features of ZFS on Linux 0.7.1, so the unsupported features must be disabled.<br />
<br />
You can create a pool with the incompatible features disabled:<br />
<br />
# zpool create -o feature@multi_vdev_crash_dump=disabled \<br />
-o feature@large_dnode=disabled \<br />
-o feature@sha512=disabled \<br />
-o feature@skein=disabled \<br />
-o feature@edonr=disabled \<br />
$POOL_NAME $VDEVS<br />
<br />
When running the git version of ZFS on Linux, make sure to also add {{ic|1=-o feature@encryption=disabled}}.<br />
<br />
=== Importing a pool created by id ===<br />
<br />
Eventually a pool may fail to mount automatically and you will need to import it to bring it back. Take care to avoid the most obvious solution.<br />
<br />
 # zpool import zfsdata   # Do not do this! Always use -d<br />
<br />
This will import your pools using {{ic|/dev/sd?}} device names, which will lead to problems the next time you rearrange your drives. This may be triggered by something as simple as rebooting with a USB drive left in the machine, which harkens back to the days when PCs would not boot with a floppy disk left in the drive. Adapt one of the following commands to import your pool so that imports retain the persistence they were created with:<br />
<br />
# zpool import -d /dev/disk/by-id zfsdata<br />
# zpool import -d /dev/disk/by-partlabel zfsdata<br />
# zpool import -d /dev/disk/by-partuuid zfsdata<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
<br />
Many parameters are available for zfs file systems, you can view a full list with {{ic|zfs get all <pool>}}. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default but for most users, it represents superfluous writes to the zpool and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that: transparent compression of data. ZFS supports a few different algorithms; presently lz4 is the default. '''gzip''' is also available for seldom-written yet highly compressible data; consult the man page for more details. Enable compression using the zfs command:<br />
# zfs set compression=on <pool><br />
<br />
Other zfs options can be displayed, again using the zfs command:<br />
# zfs get all <pool><br />
<br />
=== SSD Caching ===<br />
<br />
You can add SSD devices as a write intent log (external ZIL or SLOG) and also as a layer 2 adaptive replacement cache (L2ARC). The process to add them is very similar to adding a new VDEV.<br />
<br />
All of the below references to device-id are the IDs from {{ic|/dev/disk/by-id/*}}.<br />
<br />
==== SLOG ====<br />
<br />
To add a mirrored SLOG:<br />
# zpool add <pool> log mirror <device-id-1> <device-id-2><br />
<br />
Or to add a single device SLOG (unsafe):<br />
# zpool add <pool> log <device-id><br />
<br />
Because the SLOG device stores data that has not yet been written to the pool, it is important to use devices that can finish writes when power is lost. It is also important to use redundancy, since a device failure can cause data loss. In addition, the SLOG is only used for sync writes, so it may not provide any performance improvement.<br />
<br />
==== L2ARC ====<br />
<br />
To add L2ARC:<br />
# zpool add <pool> cache <device-id><br />
<br />
Because every block cached in L2ARC uses a small amount of memory, it is generally only useful in workloads where the amount of hot data is ''bigger'' than the maximum amount of memory the machine can hold, but small enough to fit into L2ARC. It is also cleared at reboot and is only a read cache, so redundancy is unnecessary. Counter-intuitively, L2ARC can actually harm performance, since it takes memory away from the ARC.<br />
<br />
=== Database ===<br />
<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128 KiB, which means it will dynamically allocate blocks of any size from 512 B to 128 KiB depending on the size of the file being written. This can often help reduce fragmentation and speed up file access, at the cost that, being copy-on-write, ZFS has to rewrite a whole 128 KiB block each time only a few bytes within it change.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this is that ZFS will be committing data '''twice''' to the data disks, and it can severely impact performance. You can tell ZFS to prefer to not use the ZIL, and in which case, data is only committed to the file system once. Setting this for non-database file systems, or for pools with configured log devices, can actually ''negatively'' impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
<br />
If you would like to use ZFS to store your /tmp directory, which may be useful for storing arbitrarily-large sets of files or simply keeping your RAM free of idle data, you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (eg, with {{ic|fsync}} or {{ic|O_SYNC}}) and return immediately. While this has severe application-side data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected. Please note this does ''not'' affect the integrity of ZFS itself, only the possibility that data an application expects on-disk may not have actually been written out following a crash.<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents some kinds of privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== ZVOLs ===<br />
<br />
ZFS volumes (ZVOLs) can suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default block size ({{ic|volblocksize}}) for ZVOLs is already 8 KiB. If possible, it is best to align any partitions contained in a ZVOL to your block size (current versions of fdisk and gdisk by default automatically align at 1 MiB segments, which works), and to set file system block sizes to the same value. Other than this, you might tweak the '''volblocksize''' to accommodate the data inside the ZVOL as necessary (though 8 KiB tends to be a good value for most file systems, even when using 4 KiB blocks on that level).<br />
<br />
==== RAIDZ and Advanced Format physical disks ====<br />
<br />
Each block of a ZVOL gets its own parity blocks, and if the physical media has logical block sizes of 4096 B, 8192 B, or more, the parity needs to be stored in whole physical blocks. This can drastically increase the space requirements of a ZVOL, requiring 2× or more the physical storage capacity of the ZVOL's logical capacity. Setting the '''volblocksize''' to 16 KiB or 32 KiB can help reduce this footprint drastically.<br />
<br />
See [https://github.com/zfsonlinux/zfs/issues/1807 ZFS on Linux issue #1807] for details.<br />
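As a rough worked example of that overhead (a sketch, assuming RAIDZ1, {{ic|<nowiki>ashift=12</nowiki>}}, and the allocation behavior discussed in the issue above, where RAIDZ pads each allocation to a multiple of parity+1 sectors):<br />

```shell
# Space used by one 8 KiB ZVOL block on RAIDZ1 with 4 KiB physical
# sectors (ashift=12).
sector=4096
data_sectors=$((8192 / sector))              # 2 data sectors
parity_sectors=1                             # RAIDZ1: one parity sector per row
total=$((data_sectors + parity_sectors))     # 3 sectors
# RAIDZ pads each allocation to a multiple of (nparity + 1) = 2 sectors:
padded=$(( (total + 1) / 2 * 2 ))            # 4 sectors
echo "$((padded * sector / 1024)) KiB on disk for 8 KiB of data"
```

Here 8 KiB of logical data occupies 16 KiB on disk, illustrating the 2× figure mentioned above.<br />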
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Native encryption ===<br />
Native ZFS encryption is available in version 0.7.0.r26 or newer, provided by packages like {{AUR|zfs-linux-git}}, {{AUR|zfs-dkms-git}} or other development builds. Although version 0.7 has been released, this feature is still not enabled in the stable version as of 0.7.3, so a development build still needs to be used. An easy way of telling whether encryption is available in the installed version of ZFS is to check for the {{ic|ZFS_PROP_ENCRYPTION}} definition in {{ic|/usr/src/zfs-*/include/sys/fs/zfs.h}}.<br />
<br />
* Supported encryption options: {{ic|aes-128-ccm}}, {{ic|aes-192-ccm}}, {{ic|aes-256-ccm}}, {{ic|aes-128-gcm}}, {{ic|aes-192-gcm}} and {{ic|aes-256-gcm}}. When encryption is set to {{ic|on}}, {{ic|aes-256-ccm}} will be used.<br />
* Supported keyformats: {{ic|passphrase}}, {{ic|raw}}, {{ic|hex}}<br />
You can also specify the number of PBKDF2 iterations with {{ic|-o pbkdf2iters <n>}}; higher values make the key slower to derive, at the cost of a longer unlock time.<br />
<br />
To create a dataset including native encryption with a passphrase, use:<br />
<br />
# zfs create -o encryption=on -o keyformat=passphrase <nameofzpool>/<nameofdataset><br />
<br />
To use a key instead of using a passphrase:<br />
<br />
# dd if=/dev/urandom of=/path/to/key bs=1 count=32<br />
# zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///path/to/key <nameofzpool>/<nameofdataset><br />
<br />
You can also manually load the keys and then mount the encrypted dataset:<br />
# zfs load-key <nameofzpool>/<nameofdataset> # load key for a specific dataset<br />
# zfs load-key -a # load all keys<br />
# zfs load-key -r zpool/dataset # load all keys in a dataset<br />
<br />
When importing a pool that contains encrypted datasets, ZFS will by default not decrypt them. To load the keys at import time, use {{ic|-l}}:<br />
# zpool import -l pool<br />
<br />
You can automate this at boot with a custom systemd unit. For example: <br />
{{hc|/etc/systemd/system/zfs-key@.service|2=<nowiki><br />
[Unit]<br />
Description=Load storage encryption keys<br />
DefaultDependencies=no<br />
Before=systemd-user-sessions.service<br />
Before=zfs-mount.service<br />
After=zfs-import.target<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=/usr/bin/bash -c 'systemd-ask-password "Encrypted storage password (%i): " | /usr/bin/zfs load-key zpool/%i'<br />
<br />
[Install]<br />
WantedBy=zfs-mount.service<br />
</nowiki>}}<br />
and enable a service instance for each encrypted volume: {{ic|# systemctl enable zfs-key@dataset}}.<br />
<br />
The {{ic|1=Before=systemd-user-sessions.service}} ordering ensures that systemd-ask-password is invoked before the local IO devices are handed over to the system UI.<br />
<br />
=== Scrub ===<br />
Whenever data is read and ZFS encounters an error, it is silently repaired when possible, rewritten back to disk and logged so you can obtain an overview of errors on your pools. There is no fsck or equivalent tool for ZFS. Instead, ZFS supports a feature known as scrubbing. This traverses through all the data in a pool and verifies that all blocks can be read.<br />
<br />
==== How often should I do this? ====<br />
From the Oracle blog post [https://blogs.oracle.com/wonders-of-zfs-storage/disk-scrub-why-and-when-v2 Disk Scrub - Why and When?]:<br />
<br />
:This question is challenging for Support to answer, because as always the true answer is "It Depends". So before I offer a general guideline, here are a few tips to help you create an answer more tailored to your use pattern.<br />
<br />
:* What is the expiration of your oldest backup? You should probably scrub your data at least as often as your oldest tapes expire so that you have a known-good restore point.<br />
:* How often are you experiencing disk failures? While the recruitment of a hot-spare disk invokes a "resilver" -- a targeted scrub of just the VDEV which lost a disk -- you should probably scrub at least as often as you experience disk failures on average in your specific environment.<br />
:* How often is the oldest piece of data on your disk read? You should scrub occasionally to prevent very old, very stale data from experiencing bit-rot and dying without you knowing it.<br />
<br />
:If any of your answers to the above are "I don't know", I'll provide a general guideline: you should probably be scrubbing your zpool at least once per month. It's a schedule that works well for most use cases, provides enough time for scrubs to complete before starting up again on all but the busiest & most heavily-loaded systems, and even on very large zpools (192+ disks) should complete fairly often between disk failures.<br />
<br />
In the [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ ZFS Administration Guide] by Aaron Toponce, he advises to scrub consumer disks once a week.<br />
<br />
==== How do I do this? ====<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
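As an alternative to cron, a [[systemd]] timer can perform the same weekly scrub. A sketch; the unit names {{ic|zfs-scrub@.service}} and {{ic|zfs-scrub@.timer}} are assumptions, not shipped by any package:<br />

```ini
# /etc/systemd/system/zfs-scrub@.service
[Unit]
Description=Scrub ZFS pool %i

[Service]
Type=oneshot
ExecStart=/usr/bin/zpool scrub %i

# /etc/systemd/system/zfs-scrub@.timer
[Unit]
Description=Weekly scrub of ZFS pool %i

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it per pool, e.g. {{ic|# systemctl enable zfs-scrub@bigdata.timer}}.<br />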
<br />
You can cancel a running scrub with the command:<br />
# zpool scrub -s <pool><br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Exporting a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid in the archiso differs from that of the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can occur at boot time, abruptly dropping the system into the busybox console and requiring an archiso for an emergency repair, either by exporting the pool or by adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool:<br />
<br />
# zpool export <pool><br />
<br />
=== Renaming a zpool ===<br />
<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a different mount point ===<br />
<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Access Control Lists ===<br />
To use [[ACL]] on a ZFS pool:<br />
<br />
# zfs set acltype=posixacl <nameofzpool>/<nameofdataset><br />
# zfs set xattr=sa <nameofzpool>/<nameofdataset><br />
<br />
Setting {{ic|xattr}} is recommended for performance reasons [https://github.com/zfsonlinux/zfs/issues/170#issuecomment-27348094].<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not support swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the {{ic|getconf PAGESIZE}} command (the default on x86_64 is 4 KiB). Another option useful for keeping the system running well in low-memory situations is not caching the ZVOL data.<br />
<br />
Create an 8 GiB ZFS volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
              -o logbias=throughput -o sync=always \<br />
-o primarycache=metadata \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as a swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
{{Out of date|The hibernate hook is deprecated. Does the limitation still apply?}}<br />
Keep in mind that the hibernation hook must be loaded before filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ====<br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set {{ic|1=com.sun:auto-snapshot=false}} on it. Likewise, more fine-grained control is possible per label; for example, if no monthly snapshots are to be kept on a dataset, set {{ic|1=com.sun:auto-snapshot:monthly=false}}.<br />
<br />
{{Note|zfs-auto-snapshot-git will not create snapshots while a [[#Scrub|scrub]] is running. It is possible to override this by [[Systemd#Editing provided units|editing]] the provided systemd unit and removing {{ic|--skip-scrub}} from the {{ic|ExecStart}} line. Consequences not known, someone please edit.<br />
}}<br />
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [[wikipedia:Backup_rotation_scheme#Grandfather-father-son|"Grandfather-father-son"]] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
=== Creating a share ===<br />
ZFS has support for creating shares via SMB or [[NFS]].<br />
==== NFS ====<br />
When sharing from zfs there is no need to edit the {{ic|/etc/exports}} file. For sharing with NFS make sure to [[start]] and [[enable]] the services {{ic|nfs-server.service}} and {{ic|zfs-share.service}}.<br />
Next, to enable sharing over NFS, available to the whole network:<br />
# zfs set sharenfs=on <nameofzpool>/<nameofdataset><br />
To enable read/write access for a specific ip-range:<br />
 # zfs set sharenfs="rw=@192.168.11.0/24" <nameofzpool>/<nameofdataset><br />
To check if the dataset is shared successfully:<br />
# showmount -e `hostname`<br />
It should return something like this:<br />
Export list for hostname:<br />
/dataset 192.168.11.0/24<br />
<br />
== Troubleshooting ==<br />
=== Creating a zpool fails ===<br />
<br />
If the following error occurs during pool creation, it can be fixed:<br />
<br />
 the kernel failed to rescan the partition table: 16<br />
 cannot label 'sdc': try using parted(8) and then provide a specific slice: -1<br />
<br />
One reason this can occur is that [https://github.com/zfsonlinux/zfs/issues/2582 ZFS expects pool creation to take less than one second]. This is a reasonable assumption under ordinary conditions, but in many situations it may take longer. Each drive will need to be cleared again before another attempt can be made.<br />
<br />
 # parted /dev/sda rm 1<br />
 # parted /dev/sda rm 2<br />
 # dd if=/dev/zero of=/dev/sda bs=512 count=1<br />
 # zpool labelclear /dev/sda<br />
<br />
A brute-force creation can be attempted over and over again, and with some luck the zpool creation will take less than one second.<br />
One cause of creation slowdown can be slow burst reads and writes on a drive. By reading from the disk in parallel with zpool creation, it may be possible to increase burst speeds.<br />
<br />
# dd if=/dev/sda of=/dev/null<br />
<br />
This can be done for multiple drives by saving the above command for each drive, one per line, to a file and running:<br />
<br />
# cat $FILE | parallel<br />
<br />
Then run the zpool creation at the same time.<br />
<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
 zfs.zfs_arc_max=536870912 # (for 512 MiB)<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
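The parameter takes a value in bytes; a trivial helper to compute it for a desired size in MiB (not part of any package, just arithmetic):<br />

```shell
# Convert a desired ARC cap in MiB to the byte value expected by
# the zfs.zfs_arc_max kernel parameter.
mib=512
bytes=$((mib * 1024 * 1024))
echo "zfs.zfs_arc_max=$bytes"
```

For {{ic|1=mib=512}} this prints {{ic|1=zfs.zfs_arc_max=536870912}}, matching the example above.<br />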
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a ZFS pool:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use {{ic|-f}} with the ''zpool create'' command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding {{ic|1=spl.spl_hostid=0x00bab10c}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}} and then [[regenerate the initramfs]] image, which will copy the hostid into the initramfs image.<br />
<br />
=== Pool cannot be found while booting from SAS/SCSI devices ===<br />
<br />
In case you are booting from SAS/SCSI based devices, you might occasionally get boot problems where the pool you are trying to boot from cannot be found. A likely reason is that your devices are initialized too late in the boot process, so ZFS cannot find any devices at the time it tries to assemble your pool.<br />
<br />
In this case you should force the SCSI driver to wait for devices to come online before continuing. You can do this by putting the following into {{ic|/etc/modprobe.d/zfs.conf}}:<br />
<br />
{{hc|1=/etc/modprobe.d/zfs.conf|2=<br />
options scsi_mod scan=sync<br />
}}<br />
<br />
Afterwards, [[regenerate the initramfs]].<br />
<br />
This works because the zfs hook copies the file {{ic|/etc/modprobe.d/zfs.conf}} into the initcpio, which is then used at early boot.<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use:<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly from the network setup. During installation from the archiso, the network configuration may differ, generating a hostid different from the one contained in the new installation. Once the ZFS filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then [[regenerate the initramfs]] in the normally booted system.<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, the hostid is not yet correctly set in the early boot phase, which confuses zfs. Manually tell zfs the correct number; once the hostid is consistent across reboots, the zpool will mount correctly.<br />
<br />
Boot using {{ic|zfs_force}} and write down the hostid. The following is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0x0a0af0f8}}. Another solution is writing the hostid into the initramfs image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
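<br />
The second approach boils down to writing the 4-byte hostid into {{ic|/etc/hostid}} in little-endian byte order (assuming an x86_64 machine) so the value is baked into the initramfs. A minimal sketch using the example hostid above, writing to a local file first:<br />
<br />
```shell
hostid=0a0af0f8   # example value; substitute the output of `hostid`
# /etc/hostid stores the four bytes in little-endian order;
# emit them via portable octal escapes, then inspect the result
for i in 7 5 3 1; do
    byte=$(echo "$hostid" | cut -c$i-$((i+1)))
    printf "$(printf '\\%03o' 0x$byte)"
done > hostid
od -An -tx1 hostid   # shows: f8 f0 0a 0a
```
<br />
Copy the resulting file to {{ic|/etc/hostid}} in the installed system and [[regenerate the initramfs]]; recent zfs-utils releases also ship a {{ic|zgenhostid}} helper for this.<br />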
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but this is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive.<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [https://github.com/zfsonlinux/zfs/wiki/faq#performance-considerations 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb | grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
=== Pool resilvering stuck/restarting/slow? ===<br />
According to the ZFSonLinux GitHub, this is a known issue since 2012 with ZFS-ZED, which causes the resilvering process to constantly restart, sometimes get stuck, and be generally slow on some hardware. The simplest mitigation is to stop {{ic|zfs-zed.service}} until the resilver completes.<br />
<br />
=== Fix slow boot caused by failed import of unavailable pools in the initramfs zpool.cache ===<br />
<br />
Your boot time can be significantly impacted if you update your initramfs (e.g. when doing a kernel update) while additional, non-permanently attached pools are imported, because these pools will be added to your initramfs zpool.cache and ZFS will attempt to import them on every boot, regardless of whether you have since exported them and removed them from your regular zpool.cache.<br />
<br />
If you notice ZFS trying to import unavailable pools at boot, first run:<br />
<br />
$ zdb -C<br />
<br />
to check your zpool.cache for pools you do not want imported at boot. If this command shows additional, currently unavailable pools, run:<br />
<br />
# zpool set cachefile=/etc/zfs/zpool.cache zroot<br />
<br />
to clear the zpool.cache of any pools other than the pool named zroot. Sometimes there is no need to refresh the zpool.cache; instead, all you need to do is rebuild the initramfs:<br />
<br />
# mkinitcpio -p linux<br />
<br />
Or '''linux-lts''', depending on the kernel variant you are running.<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
Follow the [[Archiso]] steps for creating a fully functional Arch Linux live CD/DVD/USB image.<br />
<br />
Enable the [[Unofficial user repositories#archzfs|archzfs]] repository:<br />
<br />
{{hc|~/archlive/pacman.conf|<nowiki><br />
...<br />
[archzfs]<br />
Server = http://archzfs.com/$repo/x86_64<br />
</nowiki>}}<br />
<br />
Add the {{ic|archzfs-linux}} group to the list of packages to be installed (the {{ic|archzfs}} repository provides packages for the x86_64 architecture only).<br />
<br />
{{hc|~/archlive/packages.x86_64|<br />
...<br />
archzfs-linux<br />
}}<br />
<br />
Complete [[Archiso#Build_the_ISO|Build the ISO]] to finally build the iso.<br />
<br />
{{Note|If you later have problems running {{ic|modprobe zfs}}, include {{ic|linux-headers}} in {{ic|packages.x86_64}}.}}<br />
<br />
=== Encryption in ZFS using dm-crypt ===<br />
The stable release version of ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while retaining all the advantages of ZFS, like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created across multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 \<br />
--key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed no import errors will occur.<br />
<br />
Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. Encrypted data always has high entropy, making compression ineffective, and even the same input produces different output (thanks to salting), making deduplication impossible. To reduce the unnecessary overhead, it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
<br />
# zfs create -o compression=off -o dedup=off -o mountpoint=/home/<username> <zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
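<br />
The entropy argument above is easy to demonstrate without ZFS at all: a general-purpose compressor finds almost nothing to squeeze out of high-entropy (encrypted-like) data, while redundant data collapses. A quick illustration using gzip rather than ZFS's own lz4:<br />
<br />
```shell
# Compare compressibility of high-entropy vs. highly redundant data
head -c 65536 /dev/urandom > random.bin
head -c 65536 /dev/zero   > zero.bin
gzip -k random.bin zero.bin
# random.bin.gz stays near 64 KiB; zero.bin.gz drops to well under 1 KiB
wc -c random.bin.gz zero.bin.gz
```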
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
To get into the ZFS filesystem from a live system for maintenance, there are two options:<br />
<br />
# Build custom archiso with ZFS as described in [[#Embed the archzfs packages into an archiso]].<br />
# Boot the latest official archiso and bring up the network. Then enable [[Unofficial_user_repositories#archzfs|archzfs]] repository inside the live system as usual, sync the pacman package database and install the ''archzfs-archiso-linux'' package.<br />
<br />
To start the recovery, load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux but using the matching kernel modules directory name under the chroot's /lib/modules)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
=== Bind mount ===<br />
Here a bind mount from /mnt/zfspool to /srv/nfs4/music is created. The configuration ensures that the zfs pool is ready before the bind mount is created.<br />
<br />
==== fstab ====<br />
See [http://www.freedesktop.org/software/systemd/man/systemd.mount.html systemd.mount] for more information on how systemd converts fstab into mount unit files with [http://www.freedesktop.org/software/systemd/man/systemd-fstab-generator.html systemd-fstab-generator].<br />
<br />
{{hc|/etc/fstab|<nowiki><br />
/mnt/zfspool /srv/nfs4/music none bind,defaults,nofail,x-systemd.requires=zfs-mount.service 0 0<br />
</nowiki>}}<br />
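<br />
For reference, the unit that ''systemd-fstab-generator'' produces from this line is roughly equivalent to the following hand-written sketch (illustrative only; the generator creates the real unit automatically, and the fstab-only {{ic|nofail}} option is expressed as dependency wiring on {{ic|local-fs.target}} rather than as a mount option):<br />
<br />
```ini
# Sketch of the generated srv-nfs4-music.mount unit
[Unit]
Requires=zfs-mount.service
After=zfs-mount.service

[Mount]
What=/mnt/zfspool
Where=/srv/nfs4/music
Type=none
Options=bind
```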
<br />
=== Monitoring / Mailing on Events ===<br />
See [https://ramsdenj.com/2016/08/29/arch-linux-on-zfs-part-3-followup.html ZED: The ZFS Event Daemon] for more information.<br />
<br />
An email forwarder, such as [[S-nail]] (installed as part of {{Grp|base}}), is required to accomplish this. Test it to be sure it is working correctly.<br />
<br />
Uncomment the following in the configuration file:<br />
<br />
{{hc|/etc/zfs/zed.d/zed.rc|<nowiki><br />
ZED_EMAIL_ADDR="root"<br />
ZED_EMAIL_PROG="mailx"<br />
ZED_NOTIFY_VERBOSE=0<br />
ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"<br />
</nowiki>}}<br />
<br />
Update 'root' in {{ic|1=ZED_EMAIL_ADDR="root"}} to the email address you want to receive notifications at.<br />
<br />
If you're keeping your mailrc in your home directory, you can tell mail to get it from there by setting {{ic|MAILRC}}:<br />
<br />
{{hc|/etc/zfs/zed.d/zed.rc|<nowiki><br />
export MAILRC=/home/<user>/.mailrc<br />
</nowiki>}}<br />
<br />
This works because ZED sources this file, so {{ic|mailx}} sees this environment variable.<br />
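<br />
Because {{ic|zed.rc}} is an ordinary shell fragment, anything exported there becomes visible to the mail program ZED spawns. A stand-alone illustration (the file written here is a local stand-in, not the real ZED configuration):<br />
<br />
```shell
# Simulate ZED sourcing zed.rc, then spawning a child process
cat > zed.rc <<'EOF'
export MAILRC=/home/user/.mailrc
EOF
. ./zed.rc
sh -c 'echo "child sees MAILRC=$MAILRC"'
```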
<br />
If you want to receive an email no matter the state of your pool, set {{ic|1=ZED_NOTIFY_VERBOSE=1}}. You will need to do this temporarily to test.<br />
<br />
Start and enable {{ic|zfs-zed.service}}.<br />
<br />
With {{ic|1=ZED_NOTIFY_VERBOSE=1}}, you can test by running a scrub: {{ic|1=sudo zpool scrub <pool-name>}}.<br />
<br />
===Wrap shell commands in pre & post snapshots===<br />
Since it is so cheap to make a snapshot, we can use this as a safety measure for sensitive commands such as system and package upgrades. If we make a snapshot before, and one after, we can later diff these snapshots to find out what changed on the filesystem after the command executed. Furthermore, we can also roll back in case the outcome was not desired.<br />
<br />
E.g.:<br />
<br />
# zfs snapshot -r zroot@pre<br />
# pacman -Syyu # dangerous command<br />
# zfs snapshot -r zroot@post<br />
# zfs diff zroot@pre zroot@post <br />
# zfs rollback zroot@pre<br />
<br />
<br />
A utility that automates the creation of pre and post snapshots around a shell command is [https://gist.github.com/erikw/eeec35be33e847c211acd886ffb145d5 znp].<br />
<br />
E.g.:<br />
<br />
# znp pacman -Syyu<br />
# znp find / -name "something*" -delete<br />
<br />
This gives you snapshots created before and after the supplied command, with the command's output logged to a file for future reference, so you know which command created the diff seen in a pair of pre/post snapshots.<br />
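<br />
The core of such a wrapper is only a few lines. The sketch below (the {{ic|snapwrap}} name and the {{ic|ZFS}}/{{ic|POOL}} variables are invented here, not part of znp) makes the zfs binary overridable so the logic can be dry-run without a pool:<br />
<br />
```shell
ZFS=${ZFS:-zfs}       # set ZFS=echo for a dry run
POOL=${POOL:-zroot}   # pool to snapshot recursively

snapwrap() {
    stamp=$(date +%Y%m%d-%H%M%S)
    "$ZFS" snapshot -r "$POOL@pre-$stamp" || return 1
    "$@"                # run the wrapped command
    rc=$?
    "$ZFS" snapshot -r "$POOL@post-$stamp"
    return $rc
}

# Dry run: prints the snapshot commands instead of executing them
ZFS=echo snapwrap true
```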
<br />
== See also ==<br />
<br />
* [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ Aaron Toponce's 17-part blog on ZFS]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [https://github.com/zfsonlinux/zfs/wiki/faq ZFS on Linux FAQ]<br />
* [https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/zfs.html FreeBSD Handbook -- The Z File System]<br />
* [https://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]{{Dead link|2017|05|30}}<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ How Pingdom uses ZFS to back up 5TB of MySQL data every day]<br />
* [https://www.linuxquestions.org/questions/linux-from-scratch-13/%5Bhow-to%5D-add-zfs-to-the-linux-kernel-4175514510/ Tutorial on adding the modules to a custom kernel]</div>
<div>[[Category:File systems]]<br />
[[Category:Oracle]]<br />
[[ja:ZFS]]<br />
[[ru:ZFS]]<br />
[[zh-hans:ZFS]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|ZFS/Virtual disks}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. <br />
<br />
Features of ZFS include: pooled storage (integrated volume management – zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum file size of [[Wikipedia:Exabyte|16 exabytes]], and a maximum storage capacity of 256 quadrillion [[Wikipedia:Zettabyte|zettabytes]], with no limit on the number of filesystems (datasets) or files[http://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/index.html]. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the CDDL, and thus incompatible with GPL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
{{Note|Due to potential legal incompatibilities between CDDL license of ZFS code and GPL of the Linux kernel ([https://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/ ],[[wikipedia:Common_Development_and_Distribution_License#GPL_compatibility|CDDL-GPL]],[[wikipedia:ZFS#Linux|ZFS in Linux]]) - ZFS development is not supported by the kernel.<br />
<br />
As a result:<br />
* The ZFSonLinux project must keep up with Linux kernel versions. After a stable ZFSonLinux release is made, the Arch ZFS maintainers package it.<br />
* This situation sometimes blocks the normal rolling update process with unsatisfied dependencies, because the new kernel version proposed by the update is not yet supported by ZFSonLinux.}}<br />
<br />
== Installation ==<br />
=== General ===<br />
<br />
{{warning|Unless you use the [[dkms]] versions of these packages, the ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#archzfs|archzfs]] repository.}}<br />
<br />
{{Tip| You can [[downgrade]] your linux version to the one from [[Unofficial user repositories#archzfs|archzfs]] repo if your current kernel is newer.}}<br />
<br />
Install from the [[Arch User Repository]] or the [[Unofficial user repositories#archzfs|archzfs]] repository:<br />
* {{AUR|zfs-linux}} for [http://zfsonlinux.org/ stable] releases.<br />
* {{AUR|zfs-linux-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases (with support of newer kernel versions).<br />
* {{AUR|zfs-linux-lts}} for stable releases for LTS kernels.<br />
* {{AUR|zfs-linux-lts-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for LTS kernels.<br />
* {{AUR|zfs-linux-hardened}} for stable releases for hardened kernels.<br />
* {{AUR|zfs-linux-hardened-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for hardened kernels.<br />
* {{AUR|zfs-linux-zen}} for stable releases for zen kernels.<br />
* {{AUR|zfs-linux-zen-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for zen kernels.<br />
* {{AUR|zfs-dkms}} for versions with dynamic kernel module support.<br />
* {{AUR|zfs-dkms-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for versions with dynamic kernel module support.<br />
<br />
These packages depend on {{ic|zfs-utils}}, {{ic|spl}} and {{ic|spl-utils}}. SPL (Solaris Porting Layer) is a Linux kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Root on ZFS ===<br />
<br />
When performing an Arch install on ZFS, {{AUR|zfs-linux}} or {{AUR|zfs-dkms}} and its dependencies can be installed in the archiso environment as outlined in the previous section.<br />
<br />
It may be useful to prepare a [[#Embed the archzfs packages into an archiso|customized archiso]] with ZFS support builtin. For a much more detailed guide on installing Arch with ZFS as its root file system, see [[Installing Arch Linux on ZFS]].<br />
<br />
=== DKMS ===<br />
<br />
Users can make use of [[Dynamic Kernel Module Support]] (DKMS) to rebuild the ZFS modules automatically with every kernel upgrade.<br />
<br />
Install {{AUR|zfs-dkms}} or {{AUR|zfs-dkms-git}} and apply the post-install instructions given by these packages.<br />
<br />
{{Tip|Add an {{ic|IgnorePkg}} entry to [[pacman.conf]] to prevent these packages from upgrading when doing a regular update.}}<br />
<br />
== Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
== Configuration ==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
=== Automatic Start ===<br />
<br />
For ZFS to live up to its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit of this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
{{Note|Beginning with ZOL version 0.6.5.8 the ZFS service unit files have been changed so that you need to explicitly enable any ZFS services you want to run.<br />
<br />
See [https://github.com/archzfs/archzfs/issues/72 https://github.com/archzfs/archzfs/issues/72] for more information.<br />
<br />
}}<br />
<br />
In order to mount zfs pools automatically on boot you need to enable the following services and targets:<br />
<br />
# systemctl enable zfs-import-cache<br />
# systemctl enable zfs-mount<br />
# systemctl enable zfs-import.target<br />
<br />
or, as explained on [https://github.com/archzfs/archzfs/issues/72 the GitHub issue], use the [https://www.freedesktop.org/software/systemd/man/systemd.preset.html systemd preset file]:<br />
<br />
# systemctl preset $(tail -n +2 /usr/lib/systemd/system-preset/50-zfs.preset | cut -d ' ' -f 2)<br />
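<br />
To see what that pipeline actually feeds to ''systemctl preset'', you can run it against a stand-in preset file (contents illustrative; the real file is shipped by the ZFS packages):<br />
<br />
```shell
# Simulated /usr/lib/systemd/system-preset/50-zfs.preset
cat > 50-zfs.preset <<'EOF'
# ZFS related units
enable zfs-import-cache.service
enable zfs-mount.service
enable zfs.target
EOF
# tail skips the first (comment) line; cut keeps the unit names
tail -n +2 50-zfs.preset | cut -d ' ' -f 2
```
<br />
With the sample contents above, this prints the three unit names, which is exactly the argument list handed to {{ic|systemctl preset}}.<br />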
<br />
== Creating a storage pool ==<br />
<br />
Use {{ic| # parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.<br />
<br />
{{Note|If some or all devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information ([[Mdadm#Prepare the devices]]).}}<br />
<br />
{{Warning|For Advanced Format Disks with 4KB sector size, an ashift of 12 is recommended for best performance. Advanced Format disks emulate a sector size of 512 bytes for compatibility with legacy systems, this causes ZFS to sometimes use an ashift option number that is not ideal. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [https://github.com/zfsonlinux/zfs/wiki/faq#performance-considerations 1.10 What’s going on with performance?], [https://github.com/zfsonlinux/zfs/wiki/faq#advanced-format-disks 1.15 How does ZFS on Linux handle Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the IDs of the drives to add to the zpool. The [https://github.com/zfsonlinux/zfs/wiki/faq#selecting-dev-names-when-creating-a-pool zfs on Linux developers recommend] using device IDs when creating ZFS storage pools of less than 10 devices. To find the IDs, simply:<br />
<br />
# ls -lh /dev/disk/by-id/<br />
<br />
The IDs should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
{{Warning|If you create zpools using device names (e.g. /dev/sda,/dev/sdb,...) ZFS might not be able to detect zpools intermittently on boot.}}<br />
<br />
{{Style|Missing references to [[Persistent block device naming]], it is useless to explain the differences (or even what they are) here.}}<br />
<br />
Disk labels and UUID can also be used for ZFS mounts by using [[GUID Partition Table|GPT]] partitions. ZFS drives have labels but Linux is unable to read them at boot. Unlike [[Master Boot Record|MBR]] partitions, GPT partitions directly support both UUID and labels independent of the format inside the partition. Partitioning rather than using the whole disk for ZFS offers two additional advantages. The OS does not generate bogus partition numbers from whatever unpredictable data ZFS has written to the partition sector, and if desired, you can easily over provision SSD drives, and slightly over provision spindle drives to ensure that different models with slightly different sector counts can zpool replace into your mirrors. This is a lot of organization and control over ZFS using readily available tools and techniques at almost zero cost.<br />
<br />
Use [[GUID Partition Table|gdisk]] to partition all or part of the drive as a single partition. gdisk does not automatically name partitions, so if partition labels are desired, use the gdisk command "c" to label the partitions. Some reasons you might prefer labels over UUIDs are: labels are easy to control, labels can be titled to make the purpose of each disk in your arrangement readily apparent, and labels are shorter and easier to type. These are all advantages when the server is down and the heat is on. GPT partition labels have plenty of space and can store most international characters ([[wikipedia:GUID_Partition_Table#Partition_entries]]), allowing large data pools to be labeled in an organized fashion.<br />
<br />
Drives partitioned with GPT have labels and UUIDs that look like this:<br />
<br />
# ls -l /dev/disk/by-partlabel<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata1 -> ../../sdd1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata2 -> ../../sdc1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:59 zfsl2arc -> ../../sda1<br />
<br />
# ls -l /dev/disk/by-partuuid<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 148c462c-7819-431a-9aba-5bf42bb5a34e -> ../../sdd1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:59 4f95da30-b2fb-412b-9090-fc349993df56 -> ../../sda1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 e5ccef58-5adf-4094-81a7-3bac846a885f -> ../../sdc1<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> [raidz(2|3)|mirror] <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz(2|3)|mirror''': This is the type of virtual device that will be created from the pool of devices. raidz is a single disk of parity, raidz2 has 2 disks of parity, and raidz3 has 3 disks of parity, similar to raid5 and raid6. Also available is '''mirror''', which is similar to raid1 or raid10, but is not constrained to just 2 devices. If not specified, each device will be added as a vdev, which is similar to raid0. After creation, a device can be added to each single-drive vdev to turn it into a mirror, which can be useful for migrating data.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool. Get them from {{ic|/dev/disk/by-id}}.<br />
<br />
Create pool with single raidz vdev:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
Create pool with two mirror vdevs:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
mirror \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
mirror \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Advanced Format disks ===<br />
<br />
At pool creation, '''ashift=12''' should always be used, except with SSDs that have 8k sectors, where '''ashift=13''' is correct. A vdev of 512 byte disks using 4k sectors will not experience performance issues, but a 4k disk using 512 byte sectors will. Since '''ashift''' cannot be changed after pool creation, even a pool with only 512 byte disks should use 4k, because those disks may need to be replaced with 4k disks or the pool may be expanded by adding a vdev composed of 4k disks. Because correct detection of 4k disks is not reliable, {{ic|<nowiki>-o ashift=12</nowiki>}} should always be specified during pool creation. See the [https://github.com/zfsonlinux/zfs/wiki/faq#advanced-format-disks ZFS on Linux FAQ] for more details.<br />
<br />
Create pool with ashift=12 and single raidz vdev:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Verifying pool creation ===<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
=== GRUB-compatible pool creation ===<br />
<br />
{{note|This section frequently goes out of date with updates to GRUB and ZFS. Consult the manual pages for the most up-to-date information.}}<br />
<br />
By default, ''zpool create'' enables all features on a pool. If {{ic|/boot}} resides on ZFS when using [[GRUB]], you must only enable features supported by GRUB, otherwise GRUB will not be able to read the pool. GRUB 2.02 supports the read-write features {{ic|lz4_compress}}, {{ic|hole_birth}}, {{ic|embedded_data}}, {{ic|extensible_dataset}}, and {{ic|large_blocks}}; this does not cover all the features of ZFSonLinux 0.7.1, so the unsupported features must be disabled.<br />
<br />
You can create a pool with the incompatible features disabled:<br />
<br />
# zpool create -o feature@multi_vdev_crash_dump=disabled \<br />
-o feature@large_dnode=disabled \<br />
-o feature@sha512=disabled \<br />
-o feature@skein=disabled \<br />
-o feature@edonr=disabled \<br />
$POOL_NAME $VDEVS<br />
<br />
When running the git version of ZFS on Linux, make sure to also add {{ic|1=-o feature@encryption=disabled}}.<br />
<br />
=== Importing a pool created by id ===<br />
<br />
Eventually a pool may fail to auto-mount and you will need to import it manually to bring it back. Take care to avoid the most obvious solution.<br />
<br />
# ###zpool import zfsdata # Do not do this! Always use -d<br />
<br />
This will import your pools using {{ic|/dev/sd?}} which will lead to problems the next time you rearrange your drives. This may be as simple as rebooting with a USB drive left in the machine, which harkens back to a time when PCs would not boot when a floppy disk was left in a machine. Adapt one of the following commands to import your pool so that pool imports retain the persistence they were created with.<br />
<br />
# zpool import -d /dev/disk/by-id zfsdata<br />
# zpool import -d /dev/disk/by-partlabel zfsdata<br />
# zpool import -d /dev/disk/by-partuuid zfsdata<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
<br />
Many parameters are available for ZFS file systems; you can view a full list with {{ic|zfs get all <pool>}}. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool, and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that, transparent compression of data. ZFS supports a few different algorithms; presently lz4 is the default. '''gzip''' is also available for seldom-written yet highly-compressible data; consult the man page for more details. Enable compression using the zfs command:<br />
# zfs set compression=on <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# zfs get all <pool><br />
<br />
=== SSD Caching ===<br />
<br />
You can add SSD devices as a write intent log (external ZIL or SLOG) and also as a layer 2 adaptive replacement cache (L2ARC). The process to add them is very similar to adding a new VDEV.<br />
<br />
All of the below references to device-id are the IDs from {{ic|/dev/disk/by-id/*}}.<br />
<br />
==== SLOG ====<br />
<br />
To add a mirrored SLOG:<br />
# zpool add <pool> log mirror <device-id-1> <device-id-2><br />
<br />
Or to add a single device SLOG (unsafe):<br />
# zpool add <pool> log <device-id><br />
<br />
Because the SLOG device stores data that has not yet been written to the pool, it is important to use devices that can finish writes when power is lost. It is also important to use redundancy, since a device failure can cause data loss. In addition, the SLOG is only used for sync writes, so it may not provide any performance improvement.<br />
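To confirm that the log device is attached and to watch whether sync writes are actually hitting it, the standard status and I/O statistics commands can be used ({{ic|<pool>}} is a placeholder):<br />

```shell
# Show the pool layout, including the "logs" section
zpool status <pool>

# Per-vdev I/O statistics refreshed every 5 seconds;
# write activity on the log vdev indicates sync writes using the SLOG
zpool iostat -v <pool> 5
```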
<br />
==== L2ARC ====<br />
<br />
To add L2ARC:<br />
# zpool add <pool> cache <device-id><br />
<br />
Because every block cached in L2ARC uses a small amount of memory, it is generally only useful in workloads where the amount of hot data is ''bigger'' than the maximum amount of memory that can fit in the computer, but small enough to fit into L2ARC. It is also cleared at reboot and is only a read cache, so redundancy is unnecessary. Counter-intuitively, L2ARC can actually harm performance since it takes memory away from ARC.<br />
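To judge whether an L2ARC device is actually helping, ZFS on Linux exposes ARC and L2ARC counters in {{ic|/proc/spl/kstat/zfs/arcstats}}. A minimal sketch that computes the L2ARC hit rate from those counters (field names as exported by the module):<br />

```shell
# arcstats lines have the form: <name> <type> <value>;
# pick out l2_hits and l2_misses and print a hit-rate percentage
awk '/^l2_hits /   { hits = $3 }
     /^l2_misses / { misses = $3 }
     END {
         if (hits + misses > 0)
             printf "L2ARC hit rate: %.1f%%\n", 100 * hits / (hits + misses)
         else
             print "L2ARC has seen no reads yet"
     }' /proc/spl/kstat/zfs/arcstats
```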
<br />
=== Database ===<br />
<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help reduce fragmentation and speed up file access, at the cost that ZFS has to allocate new 128KiB blocks each time only a few bytes are written.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result is that ZFS commits data '''twice''' to the data disks, which can severely impact performance. You can tell ZFS to prefer not to use the ZIL, in which case data is only committed to the file system once. Setting this for non-database file systems, or for pools with configured log devices, can actually ''negatively'' impact performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
<br />
If you would like to use ZFS to store your /tmp directory, which may be useful for storing arbitrarily-large sets of files or simply keeping your RAM free of idle data, you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (e.g. with {{ic|fsync}} or {{ic|O_SYNC}}) and return immediately. While this has severe application-side data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected. Please note this does ''not'' affect the integrity of ZFS itself, only the possibility that data an application expects on-disk may not have actually been written out following a crash.<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents some kinds of privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== ZVOLs ===<br />
<br />
ZFS volumes (ZVOLs) can suffer from the same block size-related issues as RDBMSes, but note that for ZVOLs the relevant property is '''volblocksize''' rather than recordsize, and it defaults to 8KiB. If possible, it is best to align any partitions contained in a ZVOL to your volblocksize (current versions of fdisk and gdisk automatically align at 1MiB segments by default, which works), and to set file system block sizes to the same size. Other than this, you might tweak the '''volblocksize''' to accommodate the data inside the ZVOL as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
<br />
==== RAIDZ and Advanced Format physical disks ====<br />
<br />
Each block of a ZVOL gets its own parity disks, and if you have physical media with logical block sizes of 4096B, 8192B, or so on, the parity needs to be stored in whole physical blocks, and this can drastically increase the space requirements of a ZVOL, requiring 2× or more physical storage capacity than the ZVOL's logical capacity. Setting the '''volblocksize''' to 16k or 32k can help reduce this footprint drastically.<br />
<br />
See [https://github.com/zfsonlinux/zfs/issues/1807 ZFS on Linux issue #1807] for details.<br />
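As an illustration, a ZVOL intended to hold a VM disk on RAIDZ could be created with a larger '''volblocksize''' to limit the padding overhead described above (the pool and dataset names, and the sizes, are only examples):<br />

```shell
# Create a 32 GiB ZVOL with 16 KiB blocks;
# volblocksize is fixed at creation time and cannot be changed later
zfs create -V 32G -o volblocksize=16K <pool>/vm-disk

# Verify the resulting property
zfs get volblocksize <pool>/vm-disk
```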
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS-specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Native encryption ===<br />
Native ZFS encryption has been made available in 0.7.0.r26 or newer, provided by packages like {{AUR|zfs-linux-git}}, {{AUR|zfs-dkms-git}} or other development builds. Although version 0.7 has been released, this feature is still not enabled in the stable version as of 0.7.3, so a development build still needs to be used. An easy way to tell whether encryption is available in your installed version of ZFS is to check for the {{ic|ZFS_PROP_ENCRYPTION}} definition in {{ic|/usr/src/zfs-*/include/sys/fs/zfs.h}}.<br />
<br />
* Supported encryption options: {{ic|aes-128-ccm}}, {{ic|aes-192-ccm}}, {{ic|aes-256-ccm}}, {{ic|aes-128-gcm}}, {{ic|aes-192-gcm}} and {{ic|aes-256-gcm}}. When encryption is set to {{ic|on}}, {{ic|aes-256-ccm}} will be used.<br />
* Supported keyformats: {{ic|passphrase}}, {{ic|raw}}, {{ic|hex}}<br />
You can also specify the number of PBKDF2 iterations with {{ic|-o pbkdf2iters <n>}}; higher values slow down brute-force attempts on the passphrase, at the cost of a longer delay when unlocking the key.<br />
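Combining these options, a dataset with an explicitly chosen cipher and a raised iteration count could be created as follows (names are placeholders):<br />

```shell
# Passphrase-encrypted dataset with an explicit cipher
# and a higher PBKDF2 key-derivation cost
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           -o pbkdf2iters=500000 \
           <nameofzpool>/<nameofdataset>
```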
<br />
To create a dataset including native encryption with a passphrase, use:<br />
<br />
# zfs create -o encryption=on -o keyformat=passphrase <nameofzpool>/<nameofdataset><br />
<br />
To use a key instead of using a passphrase:<br />
<br />
# dd if=/dev/urandom of=/path/to/key bs=1 count=32<br />
# zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///path/to/key <nameofzpool>/<nameofdataset><br />
<br />
You can also manually load the keys and then mount the encrypted dataset:<br />
# zfs load-key <nameofzpool>/<nameofdataset> # load key for a specific dataset<br />
# zfs load-key -a # load all keys<br />
# zfs load-key -r zpool/dataset # load keys recursively for a dataset and its children<br />
<br />
When importing a pool that contains encrypted datasets, ZFS will by default not load their keys. To load all keys while importing, use {{ic|-l}}:<br />
# zpool import -l pool<br />
<br />
You can automate this at boot with a custom systemd unit. For example: <br />
{{hc|/etc/systemd/system/zfs-key@.service|2=<nowiki><br />
[Unit]<br />
Description=Load storage encryption keys<br />
DefaultDependencies=no<br />
Before=systemd-user-sessions.service<br />
Before=zfs-mount.service<br />
After=zfs-import.target<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=/usr/bin/bash -c 'systemd-ask-password "Encrypted storage password (%i): " | /usr/bin/zfs load-key zpool/%i'<br />
<br />
[Install]<br />
WantedBy=zfs-mount.service<br />
</nowiki>}}<br />
and enable a service instance for each encrypted volume: {{ic|# systemctl enable zfs-key@dataset}}.<br />
<br />
The {{ic|1=Before=}} reference to systemd-user-sessions.service ensures that systemd-ask-password is invoked before the local IO devices are handed over to the system UI.<br />
<br />
=== Scrub ===<br />
==== How often should I do this? ====<br />
<br />
From the Oracle blog post [https://blogs.oracle.com/wonders-of-zfs-storage/disk-scrub-why-and-when-v2 Disk Scrub - Why and When?]:<br />
<br />
:This question is challenging for Support to answer, because as always the true answer is "It Depends". So before I offer a general guideline, here are a few tips to help you create an answer more tailored to your use pattern.<br />
<br />
:* What is the expiration of your oldest backup? You should probably scrub your data at least as often as your oldest tapes expire so that you have a known-good restore point.<br />
:* How often are you experiencing disk failures? While the recruitment of a hot-spare disk invokes a "resilver" -- a targeted scrub of just the VDEV which lost a disk -- you should probably scrub at least as often as you experience disk failures on average in your specific environment.<br />
:* How often is the oldest piece of data on your disk read? You should scrub occasionally to prevent very old, very stale data from experiencing bit-rot and dying without you knowing it.<br />
<br />
:If any of your answers to the above are "I don't know", I'll provide a general guideline: you should probably be scrubbing your zpool at least once per month. It's a schedule that works well for most use cases, provides enough time for scrubs to complete before starting up again on all but the busiest & most heavily-loaded systems, and even on very large zpools (192+ disks) should complete fairly often between disk failures.<br />
<br />
In the [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ ZFS Administration Guide] by Aaron Toponce, he advises to scrub consumer disks once a week.<br />
<br />
==== How do I do this? ====<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
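Alternatively, if you prefer [[systemd]] timers over cron, a hypothetical pair of custom units (not provided by any package; names and schedule are illustrative) could perform the same weekly scrub:<br />

```ini
# /etc/systemd/system/zfs-scrub@.service
[Unit]
Description=Scrub ZFS pool %i
Requires=zfs.target
After=zfs.target

[Service]
Type=oneshot
ExecStart=/usr/bin/zpool scrub %i

# /etc/systemd/system/zfs-scrub@.timer
[Unit]
Description=Weekly scrub of ZFS pool %i

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it per pool with {{ic|# systemctl enable --now zfs-scrub@<pool>.timer}}.<br />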
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Exporting a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid is different in the archiso than it is in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempts made to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can be produced at boot time, abruptly abandoning the system in the busybox console and requiring an archiso to do an emergency repair, by either exporting the pool or adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool:<br />
<br />
# zpool export <pool><br />
<br />
=== Renaming a zpool ===<br />
<br />
Renaming a zpool that is already created is accomplished in 2 steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a different mount point ===<br />
<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Access Control Lists ===<br />
To use [[ACL]] on a ZFS pool:<br />
<br />
# zfs set acltype=posixacl <nameofzpool>/<nameofdataset><br />
# zfs set xattr=sa <nameofzpool>/<nameofdataset><br />
<br />
Setting {{ic|xattr}} is recommended for performance reasons [https://github.com/zfsonlinux/zfs/issues/170#issuecomment-27348094].<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not support swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the {{ic|getconf PAGESIZE}} command (the default on x86_64 is 4KiB). Another option useful for keeping the system running well in low-memory situations is not caching the ZVOL data.<br />
<br />
Create an 8 GiB ZFS volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o logbias=throughput -o sync=always \<br />
-o primarycache=metadata \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
{{Out of date|The hibernate hook is deprecated. Does the limitation still apply?}}<br />
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow hibernation. If you need hibernate, keep a partition for it.<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ====<br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set {{ic|1=com.sun:auto-snapshot=false}} on it. Likewise, more fine-grained control is possible per label: for example, to keep no monthly snapshots of a dataset, set {{ic|1=com.sun:auto-snapshot:monthly=false}} on it.<br />
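For example, to exclude a scratch dataset from snapshotting entirely, and to disable only the monthly snapshots on another dataset (dataset names are illustrative):<br />

```shell
# Never snapshot this dataset
zfs set com.sun:auto-snapshot=false <pool>/scratch

# Keep every label except monthly on this dataset
zfs set com.sun:auto-snapshot:monthly=false <pool>/home
```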
<br />
{{Note|zfs-auto-snapshot-git will not create snapshots while a [[#Scrub|scrub]] is running. It is possible to override this by [[Systemd#Editing provided units|editing the provided systemd unit]] and removing {{ic|--skip-scrub}} from the {{ic|ExecStart}} line. The consequences of doing so are not well documented.}}<br />
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [[wikipedia:Backup_rotation_scheme#Grandfather-father-son|"Grandfather-father-son"]] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
=== Creating a share ===<br />
ZFS has support for creating shares via SMB or [[NFS]].<br />
==== NFS ====<br />
When sharing from zfs there is no need to edit the {{ic|/etc/exports}} file. For sharing with NFS make sure to [[start]] and [[enable]] the services {{ic|nfs-server.service}} and {{ic|zfs-share.service}}.<br />
Next, to enable sharing over NFS, available to the whole network:<br />
# zfs set sharenfs=on <nameofzpool>/<nameofdataset><br />
To enable read/write access for a specific ip-range:<br />
# zfs set sharenfs="rw=@192.168.11.0/24" <nameofzpool>/<nameofdataset><br />
To check if the dataset is exported successfully:<br />
# showmount -e `hostname`<br />
It should return something like this:<br />
Export list for hostname:<br />
/dataset 192.168.11.0/24<br />
<br />
== Troubleshooting ==<br />
=== Creating a zpool fails ===<br />
<br />
If the following errors occur, they can be fixed:<br />
<br />
 the kernel failed to rescan the partition table: 16<br />
 cannot label 'sdc': try using parted(8) and then provide a specific slice: -1<br />
<br />
One reason this can occur is because [https://github.com/zfsonlinux/zfs/issues/2582 ZFS expects pool creation to take less than 1 second]. This is a reasonable assumption under ordinary conditions, but in many situations it may take longer. Each drive will need to be cleared again before another attempt can be made.<br />
<br />
# parted /dev/sda rm 1<br />
# dd if=/dev/zero of=/dev/sda bs=512 count=1<br />
# zpool labelclear /dev/sda<br />
<br />
A brute force creation can be attempted over and over again, and with some luck the zpool creation will take less than 1 second.<br />
One cause of creation slowdown can be slow burst reads and writes on a drive. By reading from the disk in parallel with zpool creation, it may be possible to increase burst speeds.<br />
<br />
# dd if=/dev/sda of=/dev/null<br />
<br />
This can be done with multiple drives by saving the above command for each drive to a file on separate lines and running <br />
<br />
# cat $FILE | parallel<br />
<br />
Then run ZPool creation at the same time.<br />
<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512MB)<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
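The limit can also be adjusted at runtime through the module parameter in {{ic|/sys}}, and the configured maximum ({{ic|c_max}}) and current ARC size can be read back from the kstats; a sketch, assuming the value is given in bytes:<br />

```shell
# Cap the ARC at 512 MiB without rebooting (the cache shrinks gradually)
echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max

# Read back the configured maximum and the current ARC size
awk '/^c_max / { print "c_max:", $3 }
     /^size /  { print "size:",  $3 }' /proc/spl/kstat/zfs/arcstats
```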
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error will occur when attempting to create a zfs filesystem:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use {{ic|-f}} with the ''zpool create'' command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding {{ic|1=spl.spl_hostid=0x00bab10c}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}}, and then [[regenerate the initramfs]] image, which will copy the hostid into the initramfs image.<br />
<br />
=== Pool cannot be found while booting from SAS/SCSI devices ===<br />
<br />
In case you are booting from SAS/SCSI based devices, you might occasionally get boot problems where the pool you are trying to boot from cannot be found. A likely reason for this is that your devices are initialized too late in the boot process, meaning ZFS cannot find any devices at the time it tries to assemble your pool.<br />
<br />
In this case you should force the scsi driver to wait for devices to come online before continuing. You can do this by putting this into {{ic|/etc/modprobe.d/zfs.conf}}:<br />
<br />
{{hc|1=/etc/modprobe.d/zfs.conf|2=<br />
options scsi_mod scan=sync<br />
}}<br />
<br />
Afterwards, [[regenerate the initramfs]].<br />
<br />
This works because the zfs hook copies the file at {{ic|/etc/modprobe.d/zfs.conf}} into the initcpio at build time, so the option is applied in early userspace.<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use,<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso the network configuration could be different generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export pool as described above, and then [[regenerate the initramfs]] in normally booted system.<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership. So during the first boot the zpool should mount correctly. If it does not there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses ZFS. Manually tell ZFS the correct number; once the hostid is consistent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. This one is just an example.<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0x0a0af0f8}}. Another solution is writing the hostid inside the initramfs image, see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check adding {{ic|zfs_force&#61;1}} in the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive.<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [https://github.com/zfsonlinux/zfs/wiki/faq#performance-considerations 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb &#124; grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
=== Pool resilvering stuck/restarting/slow? ===<br />
According to the ZFS on Linux GitHub issue tracker, this has been a known issue since 2012: the ZFS Event Daemon (zfs-zed) can cause the resilvering process to constantly restart, sometimes get stuck, and be generally slow on some hardware. The simplest mitigation is to stop {{ic|zfs-zed.service}} until the resilver completes.<br />
<br />
=== Fix slow boot caused by failed import of unavailable pools in the initramfs zpool.cache ===<br />
<br />
Your boot time can be significantly impacted if you update your initramfs (e.g. when doing a kernel update) while you have additional but non-permanently attached pools imported, because these pools will get added to your initramfs zpool.cache and ZFS will attempt to import these extra pools on every boot, regardless of whether you have exported them and removed them from your regular zpool.cache.<br />
<br />
If you notice ZFS trying to import unavailable pools at boot, first run:<br />
<br />
$ zdb -C<br />
<br />
To check your zpool.cache for pools you do not want imported at boot. If this command shows any additional, currently unavailable pools, run:<br />
<br />
# zpool set cachefile=/etc/zfs/zpool.cache zroot<br />
<br />
To clear the zpool.cache of any pools other than the pool named zroot. Sometimes there is no need to refresh your zpool.cache; instead, all you need to do is rebuild your initramfs:<br />
<br />
# mkinitcpio -p linux<br />
<br />
Or '''linux-lts''', dependent upon the kernel variant you are running.<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
Follow the [[Archiso]] steps for creating a fully functional Arch Linux live CD/DVD/USB image.<br />
<br />
Enable the [[Unofficial user repositories#archzfs|archzfs]] repository:<br />
<br />
{{hc|~/archlive/pacman.conf|<nowiki><br />
...<br />
[archzfs]<br />
Server = http://archzfs.com/$repo/x86_64<br />
</nowiki>}}<br />
<br />
Add the {{ic|archzfs-linux}} group to the list of packages to be installed (the {{ic|archzfs}} repository provides packages for the x86_64 architecture only).<br />
<br />
{{hc|~/archlive/packages.x86_64|<br />
...<br />
archzfs-linux<br />
}}<br />
<br />
Complete [[Archiso#Build_the_ISO|Build the ISO]] to finally build the iso.<br />
<br />
{{Note|If you later have problems running modprobe zfs, you should include the linux-headers in the packages.x86_64. }}<br />
<br />
=== Encryption in ZFS using dm-crypt ===<br />
The stable release version of ZFS on Linux does not support encryption directly, but zpools can be created in dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created across multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 \<br />
--key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed no import errors will occur.<br />
<br />
Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even identical input produces different output (thanks to salting), making deduplication impossible. To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
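The entropy argument is easy to demonstrate with ordinary tools. This sketch compresses 64 KiB of random bytes (standing in for ciphertext) and 64 KiB of zeros; only the low-entropy data shrinks:<br />

```shell
# High-entropy data (a stand-in for encrypted data) barely compresses,
# while low-entropy data compresses extremely well.
cd "$(mktemp -d)"
head -c 65536 /dev/urandom > random.bin   # pseudo-ciphertext
head -c 65536 /dev/zero   > zeros.bin     # highly redundant data
gzip -k random.bin zeros.bin              # -k keeps the originals
ls -l random.bin.gz zeros.bin.gz          # random.bin.gz stays near 64K, zeros.bin.gz shrinks to ~100 bytes
```

Deduplication fails for the same reason: salted encryption maps identical plaintext blocks to different ciphertext blocks.<br />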
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
<br />
# zfs create -o compression=off -o dedup=off -o mountpoint=/home/<username> <zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
To get into the ZFS filesystem from live system for maintenance, there are two options:<br />
<br />
# Build custom archiso with ZFS as described in [[#Embed the archzfs packages into an archiso]].<br />
# Boot the latest official archiso and bring up the network. Then enable [[Unofficial_user_repositories#archzfs|archzfs]] repository inside the live system as usual, sync the pacman package database and install the ''archzfs-archiso-linux'' package.<br />
<br />
To start the recovery, load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
''uname'' will show the kernel version of the archiso. If the versions differ, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux but using the matching kernel modules directory name under the chroot's /lib/modules)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
=== Bind mount ===<br />
Here a bind mount from {{ic|/mnt/zfspool}} to {{ic|/srv/nfs4/music}} is created. The configuration ensures that the ZFS pool is ready before the bind mount is created.<br />
<br />
==== fstab ====<br />
See [http://www.freedesktop.org/software/systemd/man/systemd.mount.html systemd.mount] for more information on how systemd converts fstab into mount unit files with [http://www.freedesktop.org/software/systemd/man/systemd-fstab-generator.html systemd-fstab-generator].<br />
<br />
{{hc|/etc/fstab|<nowiki><br />
/mnt/zfspool /srv/nfs4/music none bind,defaults,nofail,x-systemd.requires=zfs-mount.service 0 0<br />
</nowiki>}}<br />
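For reference, {{ic|systemd-fstab-generator}} turns the line above into a mount unit roughly equivalent to this sketch (the unit name {{ic|srv-nfs4-music.mount}} is derived from the mount path; you do not need to create this file yourself):<br />

```ini
# srv-nfs4-music.mount -- approximate generated equivalent
[Unit]
Requires=zfs-mount.service
After=zfs-mount.service

[Mount]
What=/mnt/zfspool
Where=/srv/nfs4/music
Type=none
Options=bind,defaults,nofail
```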
<br />
=== Monitoring / Mailing on Events ===<br />
See [https://ramsdenj.com/2016/08/29/arch-linux-on-zfs-part-3-followup.html ZED: The ZFS Event Daemon] for more information.<br />
<br />
An email forwarder, such as [[S-nail]] (installed as part of {{Grp|base}}), is required to accomplish this. Test it to be sure it is working correctly.<br />
<br />
Uncomment the following in the configuration file:<br />
<br />
{{hc|/etc/zfs/zed.d/zed.rc|<nowiki><br />
ZED_EMAIL_ADDR="root"<br />
ZED_EMAIL_PROG="mailx"<br />
ZED_NOTIFY_VERBOSE=0<br />
ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"<br />
</nowiki>}}<br />
<br />
Update 'root' in {{ic|1=ZED_EMAIL_ADDR="root"}} to the email address you want to receive notifications at.<br />
<br />
If you're keeping your mailrc in your home directory, you can tell mail to get it from there by setting {{ic|MAILRC}}:<br />
<br />
{{hc|/etc/zfs/zed.d/zed.rc|<nowiki><br />
export MAILRC=/home/<user>/.mailrc<br />
</nowiki>}}<br />
<br />
This works because ZED sources this file, so {{ic|mailx}} sees this environment variable.<br />
<br />
If you want to receive an email no matter the state of your pool, set {{ic|1=ZED_NOTIFY_VERBOSE=1}}. You will need to do this temporarily for testing.<br />
<br />
Start and enable {{ic|zfs-zed.service}}.<br />
<br />
With {{ic|1=ZED_NOTIFY_VERBOSE=1}}, you can test by running a scrub: {{ic|1=sudo zpool scrub <pool-name>}}.<br />
<br />
=== Wrap shell commands in pre & post snapshots ===<br />
Since it is so cheap to make a snapshot, we can use this as a safety measure for sensitive commands such as system and package upgrades. If we make a snapshot before, and one after, we can later diff these snapshots to find out what changed on the filesystem after the command executed. Furthermore, we can also roll back in case the outcome was not desired.<br />
<br />
E.g.:<br />
<br />
# zfs snapshot -r zroot@pre<br />
# pacman -Syyu # dangerous command<br />
# zfs snapshot -r zroot@post<br />
# zfs diff zroot@pre zroot@post <br />
# zfs rollback zroot@pre<br />
<br />
A utility that automates the creation of pre and post snapshots around a shell command is [https://gist.github.com/erikw/eeec35be33e847c211acd886ffb145d5 znp].<br />
<br />
E.g.:<br />
<br />
# znp pacman -Syyu<br />
# znp find / -name "something*" -delete<br />
<br />
You would then get snapshots created before and after the supplied command, with the output of the command logged to a file for future reference, so you know which command created the diff seen in a pair of pre/post snapshots.<br />
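A minimal version of such a wrapper can be sketched as a shell function (the name {{ic|snapwrap}} is hypothetical; the linked znp script is a fuller implementation with logging):<br />

```shell
# Sketch: snapshot a dataset recursively before and after a command.
# Assumes the `zfs` command is available and the dataset exists.
snapwrap() {
    dataset=$1; shift
    stamp=$(date +%Y%m%d-%H%M%S)
    zfs snapshot -r "${dataset}@pre-${stamp}" || return 1
    "$@"                       # run the wrapped command
    rc=$?
    zfs snapshot -r "${dataset}@post-${stamp}"
    echo "compare with: zfs diff ${dataset}@pre-${stamp} ${dataset}@post-${stamp}"
    return $rc
}
# usage: snapwrap zroot pacman -Syu
```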
<br />
== See also ==<br />
<br />
* [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ Aaron Toponce's 17-part blog on ZFS]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [https://github.com/zfsonlinux/zfs/wiki/faq ZFS on Linux FAQ]<br />
* [https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/zfs.html FreeBSD Handbook -- The Z File System]<br />
* [https://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]{{Dead link|2017|05|30}}<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ How Pingdom uses ZFS to back up 5TB of MySQL data every day]<br />
* [https://www.linuxquestions.org/questions/linux-from-scratch-13/%5Bhow-to%5D-add-zfs-to-the-linux-kernel-4175514510/ Tutorial on adding the modules to a custom kernel]</div>
<hr />
<div>[[Category:File systems]]<br />
[[Category:Oracle]]<br />
[[ja:ZFS]]<br />
[[ru:ZFS]]<br />
[[zh-hans:ZFS]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|ZFS/Virtual disks}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. <br />
<br />
Features of ZFS include: pooled storage (integrated volume management – zpool), [[Wikipedia:Copy-on-write|copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 exabyte]] file size, and a maximum of 256 quadrillion [[Wikipedia:Zettabyte|zettabytes]] of storage with no limit on the number of filesystems (datasets) or files[http://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/index.html]. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"], ZFS is stable, fast, secure, and future-proof. Being licensed under the CDDL, and thus incompatible with the GPL, it is not possible for ZFS to be distributed along with the Linux kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and supercomputers.<br />
<br />
{{Note|Due to the potential legal incompatibility between the CDDL license of the ZFS code and the GPL of the Linux kernel ([https://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/ ],[[wikipedia:Common_Development_and_Distribution_License#GPL_compatibility|CDDL-GPL]],[[wikipedia:ZFS#Linux|ZFS in Linux]]), ZFS development is not supported by the kernel.<br />
<br />
As a result:<br />
* The ZFSonLinux project must keep up with Linux kernel versions. After ZFSonLinux makes a stable release, the Arch ZFS maintainers package it.<br />
* This situation sometimes blocks the normal rolling update process with unsatisfied dependencies, because the new kernel version proposed by the update is unsupported by ZFSonLinux.}}<br />
<br />
== Installation ==<br />
=== General ===<br />
<br />
{{warning|Unless you use the [[dkms]] versions of these packages, the ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#archzfs|archzfs]] repository.}}<br />
<br />
{{Tip| You can [[downgrade]] your linux version to the one from [[Unofficial user repositories#archzfs|archzfs]] repo if your current kernel is newer.}}<br />
<br />
Install from the [[Arch User Repository]] or the [[Unofficial user repositories#archzfs|archzfs]] repository:<br />
* {{AUR|zfs-linux}} for [http://zfsonlinux.org/ stable] releases.<br />
* {{AUR|zfs-linux-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases (with support of newer kernel versions).<br />
* {{AUR|zfs-linux-lts}} for stable releases for LTS kernels.<br />
* {{AUR|zfs-linux-lts-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for LTS kernels.<br />
* {{AUR|zfs-linux-hardened}} for stable releases for hardened kernels.<br />
* {{AUR|zfs-linux-hardened-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for hardened kernels.<br />
* {{AUR|zfs-linux-zen}} for stable releases for zen kernels.<br />
* {{AUR|zfs-linux-zen-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for zen kernels.<br />
* {{AUR|zfs-dkms}} for versions with dynamic kernel module support.<br />
* {{AUR|zfs-dkms-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for versions with dynamic kernel module support.<br />
<br />
These packages depend on {{ic|zfs-utils}}, {{ic|spl}} and {{ic|spl-utils}}. SPL (Solaris Porting Layer) is a Linux kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Root on ZFS ===<br />
<br />
When performing an Arch install on ZFS, {{AUR|zfs-linux}} or {{AUR|zfs-dkms}} and its dependencies can be installed in the archiso environment as outlined in the previous section.<br />
<br />
It may be useful to prepare a [[#Embed the archzfs packages into an archiso|customized archiso]] with ZFS support builtin. For a much more detailed guide on installing Arch with ZFS as its root file system, see [[Installing Arch Linux on ZFS]].<br />
<br />
=== DKMS ===<br />
<br />
Users can make use of [[Dynamic Kernel Module Support]] (DKMS) to rebuild the ZFS modules automatically with every kernel upgrade.<br />
<br />
Install {{AUR|zfs-dkms}} or {{AUR|zfs-dkms-git}} and apply the post-install instructions given by these packages.<br />
<br />
{{Tip|Add an {{ic|IgnorePkg}} entry to [[pacman.conf]] to prevent these packages from upgrading when doing a regular update.}}<br />
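A hypothetical {{ic|pacman.conf}} entry (adjust the list to the packages you actually want to hold back) would look like:<br />

```ini
# /etc/pacman.conf -- in the [options] section
IgnorePkg = linux linux-headers
```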
<br />
== Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
== Configuration ==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
=== Automatic Start ===<br />
<br />
For ZFS to live by its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
{{Note|Beginning with ZOL version 0.6.5.8 the ZFS service unit files have been changed so that you need to explicitly enable any ZFS services you want to run.<br />
<br />
See [https://github.com/archzfs/archzfs/issues/72 https://github.com/archzfs/archzfs/issues/72] for more information.<br />
<br />
}}<br />
<br />
In order to mount zfs pools automatically on boot you need to enable the following services and targets:<br />
<br />
# systemctl enable zfs-import-cache<br />
# systemctl enable zfs-mount<br />
# systemctl enable zfs-import.target<br />
<br />
or, as explained on [https://github.com/archzfs/archzfs/issues/72 the GitHub issue], use the [https://www.freedesktop.org/software/systemd/man/systemd.preset.html systemd preset file]:<br />
<br />
# systemctl preset $(tail -n +2 /usr/lib/systemd/system-preset/50-zfs.preset | cut -d ' ' -f 2)<br />
<br />
== Creating a storage pool ==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the ZFS filesystem.<br />
<br />
{{Note|If some or all devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information. ([[Mdadm#Prepare the devices]])}}<br />
<br />
{{Warning|For Advanced Format Disks with 4KB sector size, an ashift of 12 is recommended for best performance. Advanced Format disks emulate a sector size of 512 bytes for compatibility with legacy systems, this causes ZFS to sometimes use an ashift option number that is not ideal. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [https://github.com/zfsonlinux/zfs/wiki/faq#performance-considerations 1.10 What’s going on with performance?], [https://github.com/zfsonlinux/zfs/wiki/faq#advanced-format-disks 1.15 How does ZFS on Linux handle Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the IDs of the drives to add to the zpool. The [https://github.com/zfsonlinux/zfs/wiki/faq#selecting-dev-names-when-creating-a-pool ZFS on Linux developers recommend] using device IDs when creating ZFS storage pools of less than 10 devices. To find the IDs, run:<br />
<br />
# ls -lh /dev/disk/by-id/<br />
<br />
The IDs should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
<br />
{{Warning|If you create zpools using device names (e.g. /dev/sda,/dev/sdb,...) ZFS might not be able to detect zpools intermittently on boot.}}<br />
<br />
{{Style|Missing references to [[Persistent block device naming]], it is useless to explain the differences (or even what they are) here.}}<br />
<br />
Disk labels and UUIDs can also be used for ZFS mounts by using [[GUID Partition Table|GPT]] partitions. ZFS drives have labels, but Linux is unable to read them at boot. Unlike [[Master Boot Record|MBR]] partitions, GPT partitions directly support both UUIDs and labels independent of the format inside the partition. Partitioning rather than using the whole disk for ZFS offers two additional advantages: the OS does not generate bogus partition numbers from whatever unpredictable data ZFS has written to the partition sector, and, if desired, you can easily over-provision SSD drives, and slightly over-provision spindle drives, to ensure that different models with slightly different sector counts can {{ic|zpool replace}} into your mirrors. This gives a lot of organization and control over ZFS using readily available tools and techniques at almost zero cost.<br />
<br />
Use [[GUID Partition Table|gdisk]] to partition all or part of the drive as a single partition. gdisk does not automatically name partitions, so if partition labels are desired, use the gdisk command "c" to label them. Some reasons to prefer labels over UUIDs: labels are easy to control, labels can be titled to make the purpose of each disk in your arrangement readily apparent, and labels are shorter and easier to type. These are all advantages when the server is down and the heat is on. GPT partition labels have plenty of space and can store most international characters ([[wikipedia:GUID_Partition_Table#Partition_entries]]), allowing large data pools to be labeled in an organized fashion.<br />
<br />
Drives partitioned with GPT have labels and UUIDs that look like this:<br />
<br />
# ls -l /dev/disk/by-partlabel<br />
 lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata1 -> ../../sdd1<br />
 lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata2 -> ../../sdc1<br />
 lrwxrwxrwx 1 root root 10 Apr 30 01:59 zfsl2arc -> ../../sda1<br />
<br />
 # ls -l /dev/disk/by-partuuid<br />
 lrwxrwxrwx 1 root root 10 Apr 30 01:44 148c462c-7819-431a-9aba-5bf42bb5a34e -> ../../sdd1<br />
 lrwxrwxrwx 1 root root 10 Apr 30 01:59 4f95da30-b2fb-412b-9090-fc349993df56 -> ../../sda1<br />
 lrwxrwxrwx 1 root root 10 Apr 30 01:44 e5ccef58-5adf-4094-81a7-3bac846a885f -> ../../sdc1<br />
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> [raidz(2|3)|mirror] <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz(2|3)|mirror''': This is the type of virtual device that will be created from the pool of devices: raidz is a single disk of parity, raidz2 is 2 disks of parity, and raidz3 is 3 disks of parity, similar to raid5 and raid6. Also available is '''mirror''', which is similar to raid1 or raid10, but is not constrained to just 2 devices. If not specified, each device will be added as a vdev, which is similar to raid0. After creation, a device can be added to each single-drive vdev to turn it into a mirror, which can be useful for migrating data.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool. Get them from {{ic|/dev/disk/by-id}}.<br />
<br />
Create pool with single raidz vdev:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
Create pool with two mirror vdevs:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
mirror \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
mirror \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Advanced Format disks ===<br />
<br />
At pool creation, '''ashift=12''' should always be used, except with SSDs that have 8k sectors, where '''ashift=13''' is correct. A vdev of 512 byte disks using 4k sectors will not experience performance issues, but a 4k disk using 512 byte sectors will. Since '''ashift''' cannot be changed after pool creation, even a pool with only 512 byte disks should use 4k, because those disks may need to be replaced with 4k disks or the pool may be expanded by adding a vdev composed of 4k disks. Because correct detection of 4k disks is not reliable, {{ic|<nowiki>-o ashift=12</nowiki>}} should always be specified during pool creation. See the [https://github.com/zfsonlinux/zfs/wiki/faq#advanced-format-disks ZFS on Linux FAQ] for more details.<br />
<br />
Create pool with ashift=12 and single raidz vdev:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
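As a reminder of what '''ashift''' means: it is simply the base-2 logarithm of the sector size ZFS will use, so {{ic|2^ashift}} gives the sector size in bytes:<br />

```shell
# ashift -> sector size: 9 -> 512 B (legacy), 12 -> 4096 B (Advanced
# Format), 13 -> 8192 B (some SSDs).
for ashift in 9 12 13; do
    echo "ashift=$ashift -> $((1 << ashift)) byte sectors"
done
```

You can compare this against a disk's reported physical sector size with {{ic|lsblk -o NAME,PHY-SEC}}.<br />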
<br />
=== Verifying pool creation ===<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
=== GRUB-compatible pool creation ===<br />
<br />
{{note|This section frequently goes out of date with updates to GRUB and ZFS. Consult the manual pages for the most up-to-date information.}}<br />
<br />
By default, ''zpool create'' enables all features on a pool. If {{ic|/boot}} resides on ZFS when using [[GRUB]], you must only enable features supported by GRUB, otherwise GRUB will not be able to read the pool. GRUB 2.02 supports the read-write features {{ic|lz4_compress}}, {{ic|hole_birth}}, {{ic|embedded_data}}, {{ic|extensible_dataset}}, and {{ic|large_blocks}}; this does not cover all the features of ZFSonLinux 0.7.1, so the unsupported features must be disabled.<br />
<br />
You can create a pool with the incompatible features disabled:<br />
<br />
# zpool create -o feature@multi_vdev_crash_dump=disabled \<br />
-o feature@large_dnode=disabled \<br />
-o feature@sha512=disabled \<br />
-o feature@skein=disabled \<br />
-o feature@edonr=disabled \<br />
$POOL_NAME $VDEVS<br />
<br />
When running the git version of ZFS on Linux, make sure to also add {{ic|1=-o feature@encryption=disabled}}.<br />
<br />
=== Importing a pool created by id ===<br />
<br />
Eventually a pool may fail to auto-mount and you will need to import it to bring your pool back. Take care to avoid the most obvious solution.<br />
<br />
# ###zpool import zfsdata # Do not do this! Always use -d<br />
<br />
This will import your pools using {{ic|/dev/sd?}} which will lead to problems the next time you rearrange your drives. This may be as simple as rebooting with a USB drive left in the machine, which harkens back to a time when PCs would not boot when a floppy disk was left in a machine. Adapt one of the following commands to import your pool so that pool imports retain the persistence they were created with.<br />
<br />
# zpool import -d /dev/disk/by-id zfsdata<br />
# zpool import -d /dev/disk/by-partlabel zfsdata<br />
# zpool import -d /dev/disk/by-partuuid zfsdata<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
<br />
Many parameters are available for ZFS file systems; you can view a full list with {{ic|zfs get all <pool>}}. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool, and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
<br />
Compression is just that: transparent compression of data. ZFS supports a few different algorithms; presently lz4 is the default. '''gzip''' is also available for seldom-written yet highly compressible data; consult the man page for more details. Enable compression using the zfs command:<br />
# zfs set compression=on <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# zfs get all <pool><br />
<br />
=== SSD Caching ===<br />
<br />
You can add SSD devices as a write intent log (external ZIL or SLOG) and also as a layer 2 adaptive replacement cache (L2ARC). The process to add them is very similar to adding a new VDEV.<br />
<br />
All of the below references to device-id are the IDs from {{ic|/dev/disk/by-id/*}}.<br />
<br />
==== SLOG ====<br />
<br />
To add a mirrored SLOG:<br />
# zpool add <pool> log mirror <device-id-1> <device-id-2><br />
<br />
Or to add a single device SLOG (unsafe):<br />
# zpool add <pool> log <device-id><br />
<br />
Because the SLOG device stores data that has not been written to the pool, it is important to use devices that can finish writes when power is lost. It is also important to use redundancy, since a device failure can cause data loss. In addition, the SLOG is only used for sync writes, so it may not provide any performance improvement.<br />
<br />
==== L2ARC ====<br />
<br />
To add L2ARC:<br />
# zpool add <pool> cache <device-id><br />
<br />
Because every block cached in L2ARC uses a small amount of memory, it is generally only useful in workloads where the amount of hot data is ''bigger'' than the maximum amount of memory that can fit in the computer, but small enough to fit into L2ARC. It is also cleared at reboot and is only a read cache, so redundancy is unnecessary. Counter-intuitively, L2ARC can actually harm performance since it takes memory away from ARC.<br />
<br />
=== Database ===<br />
<br />
ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help with fragmentation and file access, at the cost that ZFS must allocate new 128KiB blocks each time only a few bytes are written.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this is that ZFS will be committing data '''twice''' to the data disks, and it can severely impact performance. You can tell ZFS to prefer to not use the ZIL, and in which case, data is only committed to the file system once. Setting this for non-database file systems, or for pools with configured log devices, can actually ''negatively'' impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
<br />
If you would like to use ZFS to store your /tmp directory, which may be useful for storing arbitrarily large sets of files or simply keeping your RAM free of idle data, you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (e.g. with {{ic|fsync}} or {{ic|O_SYNC}}) and return immediately. While this has severe application-side data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected. Please note this does ''not'' affect the integrity of ZFS itself, only the possibility that data an application expects to be on disk may not have actually been written out following a crash.<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents some kinds of privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== ZVOLs ===<br />
<br />
ZFS volumes (ZVOLs) can suffer from the same block size-related issues as RDBMSes, but note that ZVOLs use the '''volblocksize''' property rather than recordsize, and its default is already 8 KiB. If possible, it is best to align any partitions contained in a ZVOL to your block size (current versions of fdisk and gdisk by default automatically align at 1 MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''volblocksize''' to accommodate the data inside the ZVOL as necessary (though 8 KiB tends to be a good value for most file systems, even when using 4 KiB blocks on that level).<br />
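As an illustrative sketch (the pool and volume names are placeholders, not from the original text), a ZVOL can be created with an explicit block size and the file system inside aligned to it:<br />

```shell
# Create a 32 GiB ZVOL with an explicit 8 KiB block size (the default);
# the device node appears under /dev/zvol/<pool>/.
zfs create -V 32G -o volblocksize=8K <pool>/vol

# Hypothetical follow-up: format it with a matching 4 KiB ext4 block size.
mkfs.ext4 -b 4096 /dev/zvol/<pool>/vol
```

Note that volblocksize, unlike recordsize, cannot be changed after the volume has been created.<br />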
<br />
==== RAIDZ and Advanced Format physical disks ====<br />
<br />
Each block of a ZVOL gets its own parity, and if you have physical media with logical block sizes of 4096 B, 8192 B, or so on, the parity needs to be stored in whole physical blocks. This can drastically increase the space requirements of a ZVOL, requiring 2× or more physical storage capacity than the ZVOL's logical capacity. Setting the '''volblocksize''' to 16k or 32k can help reduce this footprint drastically.<br />
<br />
See [https://github.com/zfsonlinux/zfs/issues/1807 ZFS on Linux issue #1807] for details.<br />
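For instance (pool and volume names are placeholders), the block size is set at creation time:<br />

```shell
# A 16 KiB block size amortizes parity over more sectors on RAIDZ,
# reducing the space inflation described above.
zfs create -V 100G -o volblocksize=16K <pool>/vm-disk
```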
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS-specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Native encryption ===<br />
Native ZFS encryption has been made available in 0.7.0.r26 or newer provided by packages like {{AUR|zfs-linux-git}}, {{AUR|zfs-dkms-git}} or other development builds. Despite the fact that version 0.7 has been released, this feature is still not enabled in the stable version as of 0.7.3, so a development build still needs to be used. An easy way of telling if encryption is available in the version of zfs you have installed is to check for the ZFS_PROP_ENCRYPTION definition in /usr/src/zfs-*/include/sys/fs/zfs.h.<br />
<br />
* Supported encryption options: {{ic|aes-128-ccm}}, {{ic|aes-192-ccm}}, {{ic|aes-256-ccm}}, {{ic|aes-128-gcm}}, {{ic|aes-192-gcm}} and {{ic|aes-256-gcm}}. When encryption is set to {{ic|on}}, {{ic|aes-256-ccm}} will be used.<br />
* Supported keyformats: {{ic|passphrase}}, {{ic|raw}}, {{ic|hex}}<br />
You can also specify the number of PBKDF2 iterations with {{ic|-o pbkdf2iters <n>}}; higher values make key derivation (and thus unlocking) slower, which hardens the passphrase against brute-force attacks.<br />
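As a hedged example (the iteration count is arbitrary and the names are placeholders), the option is passed at dataset creation:<br />

```shell
# More PBKDF2 iterations slow down both unlocking and brute-force attacks.
zfs create -o encryption=on -o keyformat=passphrase \
    -o pbkdf2iters=1000000 <nameofzpool>/<nameofdataset>
```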
<br />
To create a dataset including native encryption with a passphrase, use:<br />
<br />
# zfs create -o encryption=on -o keyformat=passphrase <nameofzpool>/<nameofdataset><br />
<br />
To use a key instead of using a passphrase:<br />
<br />
# dd if=/dev/urandom of=/path/to/key bs=1 count=32<br />
# zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///path/to/key <nameofzpool>/<nameofdataset><br />
<br />
You can also manually load the keys and then mount the encrypted dataset:<br />
# zfs load-key <nameofzpool>/<nameofdataset> # load key for a specific dataset<br />
# zfs load-key -a # load all keys<br />
# zfs load-key -r zpool/dataset # load all keys in a dataset<br />
<br />
When importing a pool that contains encrypted datasets, ZFS will not load their keys by default. To load all keys while importing, use {{ic|-l}}:<br />
# zpool import -l pool<br />
<br />
You can automate this at boot with a custom systemd unit. For example: <br />
{{hc|/etc/systemd/system/zfs-key@.service|2=<nowiki><br />
[Unit]<br />
Description=Load storage encryption keys<br />
DefaultDependencies=no<br />
Before=systemd-user-sessions.service<br />
Before=zfs-mount.service<br />
After=zfs-import.target<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=/usr/bin/bash -c 'systemd-ask-password "Encrypted storage password (%i): " | /usr/bin/zfs load-key zpool/%i'<br />
<br />
[Install]<br />
WantedBy=zfs-mount.service<br />
</nowiki>}}<br />
and enable a service instance for each encrypted volume: {{ic|# systemctl enable zfs-key@dataset}}.<br />
<br />
The {{ic|1=Before=systemd-user-sessions.service}} ordering ensures that ''systemd-ask-password'' is invoked before the local IO devices are handed over to the system UI.<br />
<br />
=== Scrub ===<br />
{{Accuracy|Since when do pools have to be scrubbed at least once a week? Unsubstantiated claim.}}<br />
==== How often should I do this? ====<br />
The following has been taken from [https://blogs.oracle.com/wonders-of-zfs-storage/disk-scrub-why-and-when-v2 this] Oracle blog:<br />
{{bc|This question is challenging for Support to answer, because as always the true answer is "It Depends". So before I offer a general guideline, here are a few tips to help you create an answer more tailored to your use pattern.<br />
<br />
* What is the expiration of your oldest backup? You should probably scrub your data at least as often as your oldest tapes expire so that you have a known-good restore point.<br />
* How often are you experiencing disk failures? While the recruitment of a hot-spare disk invokes a "resilver" -- a targeted scrub of just the VDEV which lost a disk -- you should probably scrub at least as often as you experience disk failures on average in your specific environment.<br />
* How often is the oldest piece of data on your disk read? You should scrub occasionally to prevent very old, very stale data from experiencing bit-rot and dying without you knowing it.<br />
<br />
If any of your answers to the above are "I don't know", I'll provide a general guideline: you should probably be scrubbing your zpool at least once per month. It's a schedule that works well for most use cases, provides enough time for scrubs to complete before starting up again on all but the busiest & most heavily-loaded systems, and even on very large zpools (192+ disks) should complete fairly often between disk failures.<br />
}}<br />
<br />
In the [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ ZFS Administration Guide], Aaron Toponce advises scrubbing consumer disks once a week.<br />
<br />
==== How do I do this? ====<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Exporting a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid in the archiso differs from the one in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating the storage pool is in use by another system. This error can occur at boot time, abruptly dropping the system into the busybox console and requiring an archiso for emergency repair, either by exporting the pool or by adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool:<br />
<br />
# zpool export <pool><br />
<br />
=== Renaming a zpool ===<br />
<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a different mount point ===<br />
<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Access Control Lists ===<br />
To use [[ACL]] on a ZFS pool:<br />
<br />
# zfs set acltype=posixacl <nameofzpool>/<nameofdataset><br />
# zfs set xattr=sa <nameofzpool>/<nameofdataset><br />
<br />
Setting {{ic|1=xattr=sa}} is recommended for performance reasons [https://github.com/zfsonlinux/zfs/issues/170#issuecomment-27348094].<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not support swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the {{ic|getconf PAGESIZE}} command (the default on x86_64 is 4 KiB). Another option useful for keeping the system running well in low-memory situations is not caching the ZVOL data.<br />
<br />
Create an 8 GiB ZFS volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
      -o logbias=throughput -o sync=always \<br />
-o primarycache=metadata \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
{{Out of date|The hibernate hook is deprecated. Does the limitation still apply?}}<br />
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ====<br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from the [[AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc.), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the {{ic|--keep}} parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set {{ic|1=com.sun:auto-snapshot=false}} on it. Likewise, finer-grained control is available per label; for example, if no monthly snapshots are to be kept on a dataset, set {{ic|1=com.sun:auto-snapshot:monthly=false}}.<br />
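These properties are set with ordinary {{ic|zfs set}} commands, for example (the dataset names are placeholders):<br />

```shell
# Exclude a dataset from automatic snapshots entirely:
zfs set com.sun:auto-snapshot=false <pool>/scratch

# Keep the other schedules but skip monthly snapshots on this dataset:
zfs set com.sun:auto-snapshot:monthly=false <pool>/media
```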
<br />
{{Note|zfs-auto-snapshot-git will not create snapshots during scrubbing ([[#Scrub|scrub]]). It is possible to override this by [[Systemd#Editing provided units|editing the provided systemd unit]] and removing {{ic|--skip-scrub}} from the {{ic|ExecStart}} line. Consequences not known, someone please edit.<br />
}}<br />
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from the [[AUR]] provides a Python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [[wikipedia:Backup_rotation_scheme#Grandfather-father-son|"Grandfather-father-son"]] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots.<br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
=== Creating a share ===<br />
ZFS has support for creating shares via SMB or [[NFS]].<br />
==== NFS ====<br />
When sharing from ZFS, there is no need to edit the {{ic|/etc/exports}} file. To share with NFS, make sure to [[start]] and [[enable]] {{ic|nfs-server.service}} and {{ic|zfs-share.service}}.<br />
Next, to enable sharing over NFS, available to the whole network:<br />
# zfs set sharenfs=on <nameofzpool>/<nameofdataset><br />
To enable read/write access for a specific ip-range:<br />
# zfs set sharenfs="rw=@192.168.11.0/24" <nameofzpool>/<nameofdataset><br />
To check if the dataset is shared successfully:<br />
# showmount -e `hostname`<br />
It should return something like this:<br />
Export list for hostname:<br />
/dataset 192.168.11.0/24<br />
<br />
== Troubleshooting ==<br />
=== Creating a zpool fails ===<br />
<br />
If the following error occurs, it can be fixed as described below.<br />
<br />
# the kernel failed to rescan the partition table: 16<br />
# cannot label 'sdc': try using parted(8) and then provide a specific slice: -1<br />
<br />
One reason this can occur is because [https://github.com/zfsonlinux/zfs/issues/2582 ZFS expects pool creation to take less than 1 second]. This is a reasonable assumption under ordinary conditions, but in many situations it may take longer. Each drive will need to be cleared again before another attempt can be made.<br />
<br />
# parted /dev/sda rm 1<br />
# parted /dev/sda rm 2<br />
# dd if=/dev/zero of=/dev/sdb bs=512 count=1<br />
# zpool labelclear /dev/sda<br />
<br />
Pool creation can be attempted repeatedly, and with some luck it will take less than 1 second.<br />
One cause of slow creation can be slow burst reads/writes on a drive. By reading from the disk in parallel to the zpool creation, it may be possible to increase burst speeds.<br />
<br />
# dd if=/dev/sda of=/dev/null<br />
<br />
This can be done with multiple drives by saving the above command for each drive to a file, one per line, and running<br />
<br />
# cat $FILE | parallel<br />
<br />
Then run ZPool creation at the same time.<br />
<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
 zfs.zfs_arc_max=536870912 # (for 512 MiB)<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
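The limit can also be adjusted at runtime via the module parameter under sysfs; note this change is temporary and does not persist across reboots:<br />

```shell
# Limit the ARC to 512 MiB (value in bytes) until the next reboot.
echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max
```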
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error may occur when attempting to create a zfs filesystem:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use {{ic|-f}} with the zpool create command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. Either place the spl hostid in the [[kernel parameters]] in the boot loader, for example by adding {{ic|1=spl.spl_hostid=0x00bab10c}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}}, and then [[regenerate the initramfs]] image, which will copy the hostid into it.<br />
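One way to populate {{ic|/etc/hostid}}, assuming your zfs package ships the {{ic|zgenhostid}} helper (check {{ic|man zgenhostid}} first), is:<br />

```shell
# Write the current hostid into /etc/hostid as a 4-byte binary file.
zgenhostid $(hostid)
```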
<br />
=== Pool cannot be found while booting from SAS/SCSI devices ===<br />
<br />
If you are booting from SAS/SCSI-based devices, you might occasionally get boot problems where the pool you are trying to boot from cannot be found. A likely reason for this is that your devices are initialized too late in the boot process, so zfs cannot find any devices at the time it tries to assemble your pool.<br />
<br />
In this case you should force the scsi driver to wait for devices to come online before continuing. You can do this by putting this into {{ic|/etc/modprobe.d/zfs.conf}}:<br />
<br />
{{hc|1=/etc/modprobe.d/zfs.conf|2=<br />
options scsi_mod scan=sync<br />
}}<br />
<br />
Afterwards, [[regenerate the initramfs]].<br />
<br />
This works because the zfs hook copies {{ic|/etc/modprobe.d/zfs.conf}} into the initcpio at build time.<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool:<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use:<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During installation from the archiso, the network configuration could be different, generating a different hostid than the one in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then [[regenerate the initramfs]] in the normally booted system.<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase, which confuses zfs. Manually tell zfs the correct number; once the hostid is consistent across reboots, the zpool will mount correctly.<br />
<br />
Boot using zfs_force and write down the hostid. The following is just an example:<br />
% hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0x0a0af0f8}}. Another solution is to write the hostid into the initramfs image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
<br />
Users can always ignore the check by adding {{ic|zfs_force&#61;1}} to the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [https://github.com/zfsonlinux/zfs/wiki/faq#performance-considerations 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb {{!}} grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
=== Pool resilvering stuck/restarting/slow? ===<br />
According to the ZFS on Linux GitHub, this has been a known issue with ZFS-ZED since 2012: the resilvering process constantly restarts, sometimes gets stuck, and is generally slow on some hardware. The simplest mitigation is to stop {{ic|zfs-zed.service}} until the resilver completes.<br />
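In practice, the mitigation is simply:<br />

```shell
# Stop ZED so it cannot restart the resilver...
systemctl stop zfs-zed.service

# ...wait until `zpool status` reports the resilver as finished, then:
systemctl start zfs-zed.service
```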
<br />
=== Fix slow boot caused by failed import of unavailable pools in the initramfs zpool.cache ===<br />
<br />
Your boot time can be significantly impacted if you update your initramfs (e.g. when doing a kernel update) while you have additional but non-permanently attached pools imported, because these pools will get added to your initramfs zpool.cache and ZFS will attempt to import these extra pools on every boot, regardless of whether you have exported them and removed them from your regular zpool.cache.<br />
<br />
If you notice ZFS trying to import unavailable pools at boot, first run:<br />
<br />
$ zdb -C<br />
<br />
to check your zpool.cache for pools you do not want imported at boot. If this command shows (an) additional, currently unavailable pool(s), run:<br />
<br />
# zpool set cachefile=/etc/zfs/zpool.cache zroot<br />
<br />
to clear the zpool.cache of any pools other than the pool named zroot. Sometimes there is no need to refresh your zpool.cache; instead, all you need to do is rebuild the initramfs:<br />
<br />
# mkinitcpio -p linux<br />
<br />
Or '''linux-lts''', depending on the kernel variant you are running.<br />
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
Follow the [[Archiso]] steps for creating a fully functional Arch Linux live CD/DVD/USB image.<br />
<br />
Enable the [[Unofficial user repositories#archzfs|archzfs]] repository:<br />
<br />
{{hc|~/archlive/pacman.conf|<nowiki><br />
...<br />
[archzfs]<br />
Server = http://archzfs.com/$repo/x86_64<br />
</nowiki>}}<br />
<br />
Add the {{ic|archzfs-linux}} group to the list of packages to be installed (the {{ic|archzfs}} repository provides packages for the x86_64 architecture only).<br />
<br />
{{hc|~/archlive/packages.x86_64|<br />
...<br />
archzfs-linux<br />
}}<br />
<br />
Complete [[Archiso#Build_the_ISO|Build the ISO]] to finally build the iso.<br />
<br />
{{Note|If you later have problems running {{ic|modprobe zfs}}, include {{ic|linux-headers}} in packages.x86_64.}}<br />
<br />
=== Encryption in ZFS using dm-crypt ===<br />
The stable release version of ZFS on Linux does not support encryption directly, but zpools can be created in dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created across multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 \<br />
--key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed no import errors will occur.<br />
<br />
Creating encrypted zpools works fine, but if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even the same input produces different output (thanks to salting), making deduplication impossible. To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, for encryption and login, must be the same):<br />
<br />
# zfs create -o compression=off -o dedup=off -o mountpoint=/home/<username> <zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
To get into the ZFS filesystem from live system for maintenance, there are two options:<br />
<br />
# Build custom archiso with ZFS as described in [[#Embed the archzfs packages into an archiso]].<br />
# Boot the latest official archiso and bring up the network. Then enable [[Unofficial_user_repositories#archzfs|archzfs]] repository inside the live system as usual, sync the pacman package database and install the ''archzfs-archiso-linux'' package.<br />
<br />
To start the recovery, load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux but using the matching kernel modules directory name under the chroot's /lib/modules)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
=== Bind mount ===<br />
Here a bind mount from /mnt/zfspool to /srv/nfs4/music is created. The configuration ensures that the zfs pool is ready before the bind mount is created.<br />
<br />
==== fstab ====<br />
See [http://www.freedesktop.org/software/systemd/man/systemd.mount.html systemd.mount] for more information on how systemd converts fstab into mount unit files with [http://www.freedesktop.org/software/systemd/man/systemd-fstab-generator.html systemd-fstab-generator].<br />
<br />
{{hc|/etc/fstab|<nowiki><br />
/mnt/zfspool /srv/nfs4/music none bind,defaults,nofail,x-systemd.requires=zfs-mount.service 0 0<br />
</nowiki>}}<br />
<br />
=== Monitoring / Mailing on Events ===<br />
See [https://ramsdenj.com/2016/08/29/arch-linux-on-zfs-part-3-followup.html ZED: The ZFS Event Daemon] for more information.<br />
<br />
An email forwarder, such as [[S-nail]] (installed as part of {{Grp|base}}), is required to accomplish this. Test it to be sure it is working correctly.<br />
<br />
Uncomment the following in the configuration file:<br />
<br />
{{hc|/etc/zfs/zed.d/zed.rc|<nowiki><br />
ZED_EMAIL_ADDR="root"<br />
ZED_EMAIL_PROG="mailx"<br />
ZED_NOTIFY_VERBOSE=0<br />
ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"<br />
</nowiki>}}<br />
<br />
Update 'root' in {{ic|1=ZED_EMAIL_ADDR="root"}} to the email address you want to receive notifications at.<br />
<br />
If you're keeping your mailrc in your home directory, you can tell mail to get it from there by setting {{ic|MAILRC}}:<br />
<br />
{{hc|/etc/zfs/zed.d/zed.rc|<nowiki><br />
export MAILRC=/home/<user>/.mailrc<br />
</nowiki>}}<br />
<br />
This works because ZED sources this file, so {{ic|mailx}} sees this environment variable.<br />
<br />
If you want to receive an email no matter the state of your pool, you will want to set {{ic|1=ZED_NOTIFY_VERBOSE=1}}. You will need to do this temporarily for testing.<br />
<br />
Start and enable {{ic|zfs-zed.service}}.<br />
<br />
With {{ic|1=ZED_NOTIFY_VERBOSE=1}}, you can test by running a scrub: {{ic|1=sudo zpool scrub <pool-name>}}.<br />
<br />
===Wrap shell commands in pre & post snapshots===<br />
Since it is so cheap to make a snapshot, we can use this as a safety measure for sensitive commands such as system and package upgrades. If we make a snapshot before, and one after, we can later diff these snapshots to find out what changed on the filesystem after the command executed. Furthermore, we can also roll back in case the outcome was not desired.<br />
<br />
E.g.:<br />
<br />
# zfs snapshot -r zroot@pre<br />
# pacman -Syyu # dangerous command<br />
# zfs snapshot -r zroot@post<br />
# zfs diff zroot@pre zroot@post <br />
# zfs rollback zroot@pre<br />
<br />
<br />
A utility that automates the creation of pre and post snapshots around a shell command is [https://gist.github.com/erikw/eeec35be33e847c211acd886ffb145d5 znp].<br />
<br />
E.g.:<br />
<br />
# znp pacman -Syyu<br />
# znp find / -name "something*" -delete<br />
<br />
This creates snapshots before and after the supplied command, and also logs the command's output to a file for future reference, so we know which command created the diff seen in a pair of pre/post snapshots.<br />
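The pre/post pattern can also be sketched as a small shell wrapper. This is a hypothetical helper, not znp itself; the default pool name and the {{ic|ZFS_CMD}} override are assumptions made so the sketch stays self-contained and can be dry-run with a stub instead of the real {{ic|zfs}} binary:

```shell
# Wrap a command in recursive pre/post snapshots of a pool.
# ZNP_POOL selects the pool (default zroot); ZFS_CMD allows substituting
# a stub for the real zfs binary when experimenting.
snapwrap() {
    local pool="${ZNP_POOL:-zroot}" zfs="${ZFS_CMD:-zfs}" ts rc
    ts=$(date +%Y%m%d-%H%M%S)
    "$zfs" snapshot -r "${pool}@pre-${ts}" || return 1
    "$@"
    rc=$?
    "$zfs" snapshot -r "${pool}@post-${ts}"
    echo "pre=${pool}@pre-${ts} post=${pool}@post-${ts} rc=${rc}"
    return "$rc"
}
```

Usage would look like {{ic|snapwrap pacman -Syu}}, after which the printed pre/post snapshot names can be fed to {{ic|zfs diff}} or {{ic|zfs rollback}}.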
<br />
== See also ==<br />
<br />
* [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ Aaron Toponce's 17-part blog on ZFS]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [https://github.com/zfsonlinux/zfs/wiki/faq ZFS on Linux FAQ]<br />
* [https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/zfs.html FreeBSD Handbook -- The Z File System]<br />
* [https://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]{{Dead link|2017|05|30}}<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ How Pingdom uses ZFS to back up 5TB of MySQL data every day]<br />
* [https://www.linuxquestions.org/questions/linux-from-scratch-13/%5Bhow-to%5D-add-zfs-to-the-linux-kernel-4175514510/ Tutorial on adding the modules to a custom kernel]</div>Mousemanhttps://wiki.archlinux.org/index.php?title=Talk:ZFS&diff=556748Talk:ZFS2018-11-23T08:31:59Z<p>Mouseman: /* Scrub */ answered question</p>
<hr />
<div>== Bindmount ==<br />
Where does this file go and what other steps are required?<br />
<br />
I would expect: /etc/systemd/system/<br />
<br />
Then: systemctl enable srv-nfs4-media.mount<br />
<br />
[[User:Msalerno|Msalerno]] ([[User talk:Msalerno|talk]]) 02:36, 22 October 2015 (UTC)<br />
<br />
== resume hook ==<br />
In think in the page is a typo, the page should state ''resume hook'' instead of hibernate, but the limitation still applies. Can anyone confirm that the resume hook must appear before filesystems? [[User:Ezzetabi|Ezzetabi]] ([[User talk:Ezzetabi|talk]]) 09:49, 18 August 2015 (UTC)<br />
<br />
== Automatic build script ==<br />
<br />
I'm fine with deleting the scripts. I only posted it because graysky's script never worked for me. Long stuff like this would be useful if the ArchWiki featured roll-up text. [[User:Severach|Severach]] ([[User talk:Severach|talk]]) 10:07, 9 August 2015 (UTC)<br />
<br />
:I'd suggest to maintain it in a github repo. You get better versioning, syntax highlighting, cloning, etc. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 12:46, 9 August 2015 (UTC)<br />
<br />
::...or an [https://help.github.com/articles/about-gists/#anonymous-gists anonymous gist] if you don't have nor want to create a GitHub account. — [[User:Kynikos|Kynikos]] ([[User talk:Kynikos|talk]]) 08:40, 10 August 2015 (UTC)<br />
<br />
:Isn't that exactly what DKMS is doing? There DKMS packages in the AUR. [[User:Das j|Das j]] ([[User talk:Das j|talk]]) 20:01, 10 January 2016 (UTC)<br />
<br />
== Automatic snapshots ==<br />
{{AUR|zfs-auto-snapshot-git}} seems to have disappeared from the AUR. I haven't been able to find any information on why it was deleted; does anyone know? In any case, it should probably be removed from this page.<br />
[[User:Warai otoko|warai otoko]] ([[User talk:Warai otoko|talk]]) 03:21, 2 September 2015 (UTC)<br />
<br />
:On further inspection, looks like it may have gotten lost in the transition to AUR4. It should be resubmitted if we want to continue recommending it here; I've found it useful, at any rate. [[User:Warai otoko|Warai otoko]] ([[User talk:Warai otoko|talk]]) 04:43, 2 September 2015 (UTC)<br />
<br />
:: I've recreated it. I use this script as well. --[[User:Chungy|Chungy]] ([[User talk:Chungy|talk]]) 02:49, 3 September 2015 (UTC)<br />
<br />
== Configuration ==<br />
<br />
The configuration section has WAY to few infos about what systemd unit(s) to enable. Thanks to @kerberizer I finally managed to get the mounts working with the command <br />
<br />
# systemctl preset $(tail -n +2 /usr/lib/systemd/system-preset/50-zfs.preset | cut -d ' ' -f 2)<br />
<br />
[[User:Z3ntu|Z3ntu]] ([[User talk:Z3ntu|talk]]) 15:21, 27 October 2016 (UTC)<br />
<br />
<br />
@Z3ntu I have ZFS running on a few systems and never had to enable any services, it should work by default, if not then file a bug on the package<br />
<br />
[[User:Justin8|Justin8]] ([[User talk:Justin8|talk]]) 22:04, 27 October 2016 (UTC)<br />
<br />
@Justin8 I tried it both in a virtual machine and on a physical computer that when you don't enable any services (I use "zfs-linux" from the archzfs repo), create a pool and reboot, it doesn't exist anymore (zpool status) and the pools don't get mounted without the zfs-mount service (or whatever it is called). I found a related issue on github: https://github.com/archzfs/archzfs/issues/61<br />
<br />
[[User:Z3ntu|Z3ntu]] ([[User talk:Z3ntu|talk]]) 08:34, 28 October 2016 (UTC)<br />
<br />
<br />
There seems to be a new systemd target ''zfs-import.target'' which must be enabled in order to auto-mount? Otherwise ''zfs-mount.service'' will be executed before ''zfs-import-cache.service'' on my machine and nothing will be mounted. --[[User:Swordfeng|Swordfeng]] ([[User talk:Swordfeng|talk]]) 12:55, 8 November 2017 (UTC)<br />
<br />
I think the section about systemd units should be rewritten to remove the old stale information and bring the required command-line to the fore. As mentioned on the github issue linked from the page and also repeated above by @Z3ntu. I've just been experimenting with ZFS and wasted a little time on this that could have been avoided if the page had been updated back in 2016. I haven't cahnged the page except to add the required command line there in case there is still relevance to the other text that I don't realise. I have just started using ZFS myself.<br />
[[User:starfry|starfry]] ([[User talk:starfry|talk]]) 16:07, 31 May 2018 (UTC)<br />
: I’ve set up ZFS recently and the ''systemctl enable'' commands from the Wiki page have worked fine for me so far. What do you mean by “old stale information,” and why is ''systemctl preset […]'' a “required command line?” —[[User:Auerhuhn|Auerhuhn]] ([[User talk:Auerhuhn|talk]]) 16:33, 31 May 2018 (UTC)<br />
<br />
That's why I never deleted anything from the page. I found that the ''systemctl enable'' commands worked up to the point that I rebooted. I discovered that the zpools were not imported on boot. Searching for information led me to the command-line on the github post and that did work for me. I thought I should raise its profile a little because I wasted a few hours on it. Actually I realised also I didn't enable the 3 services listed separately - just the ones at the top of the section (there are 6 services referenced by the github issue). But that probably is why I had the problem! Like I said, I have only just started with ZFS (I am testing in a VM with files rather than real devices) and it is possible that doing it in the small hours of the morning wasn't a good idea. The info on the page as it was left me asking more questions which were answered by the github issue and, in particular, that command line sequence. You don't need that command-line but you do need the systemd services that it enables (you could enable them by hand if you preferred). Maybe you don't need all six of them. But, as it was, it wasn't clear (to me).<br />
[[User:starfry|starfry]] ([[User talk:starfry|talk]]) 16:07, 31 May 2018 (UTC)<br />
<br />
== Scrub ==<br />
<br />
The advise to scrub at least once a week is completely unsubstantiated and probably incorrect in almost all situations. Advise should be acompanied by some argumentation and preferably links to support the claim.<br />
<br />
There is a good blog from Oracle about when and why (or not), to scrub:<br />
https://blogs.oracle.com/wonders-of-zfs-storage/disk-scrub-why-and-when-v2<br />
<br />
I wanted to edit the page to include the most important bits about scrubbing, but figured I'd throw it up for discussion first, what do people think about this?<br />
[[User:Mouseman|Mouseman]] — ([[User talk:Mouseman|talk]]) 13:15, 21 October 2018 (UTC)<br />
<br />
: I have no strong opinion but the most pragmatic/helpful part of Oracle’s article appears to be the list of three tips near the end. I feel paraphrasing those three points in the wiki would be a good thing, together with an external link to Oracle’s article (which is pretty good) to cover the details. — [[User:Auerhuhn|Auerhuhn]] ([[User talk:Auerhuhn|talk]]) 13:51, 21 October 2018 (UTC)<br />
<br />
:: Thanks for the reply. I agree, although I was thinking to include the 'Should I do this' too. I'll let this sit here for a few days and see what else turns up and edit the page next week or weekend. — [[User:Mouseman|Mouseman]] ([[User talk:Mouseman|talk]]) 17:38, 21 October 2018 (UTC)<br />
<br />
: I was curious when I saw the factual accuracy banner. I've been reading [https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/ Aaron Toponce's guide to ZFS administration] which is an extremely thorough walkthrough. In his [https://pthree.org/2012/12/11/zfs-administration-part-vi-scrub-and-resilver/ chapter on scrubbing and resilvering] he lists two heuristics. He suggests, "[t]he recommended frequency at which you should scrub the data depends on the quality of the underlying disks. If you have SAS or FC disks, then once per month should be sufficient. If you have consumer grade SATA or SCSI, you should do once per week." That might be the source of the suggestion? I'd love to hear more from people who have more experience with ZFS. --[[User:Metatinara|Metatinara]] ([[User talk:Metatinara|talk]]) 04:43, 23 November 2018 (UTC)<br />
<br />
:: Your reply reminded me that I wanted to edit the page as discussed above. I agree that guide is very good, it has helped me greatly when I got started with ZFS. But again, I have to challenge the advise. On what basis should consumer grade harddisks be scrubbed once a week? As far as I am concerned, there is no evidence, no data to support such a claim. How likely is bitrot to occur due to degradation or solar flares? EMP? How many bits can flip before data becomes irrepairable? If we have those numbers from different vendors in different situations, we can actually make an educated guess at how often scrubs should take place. I don't know of any such data or research. I know I am only one guy with limited experience but here it is: I have been using ZFS for about 6 years in three different configurations, all consumer or prosumer hardware. Before that, I used parchive and later par2 for I don't know, 20 odd years, to create 10% parity sets on important live data and offline backups, so that I could repair corruption. I would stash away old harddisks as backups like this. In my time, I had to use par2 only once because a hard drive went bad and ran out of realocated sectors. And it wasn't even a old disk, it was still in warranty. Not once did a scrub actually have to repair something. Not once did I ever find evidence of bitrot. Doesn't mean it doesn't exist because I know it does, but based on my own experience, I think it is extremely unlikely to occur and when it does, ZFS can fix it unless it's too much; but how long does that take? So based on my own experience, I am running it once every few months and I'll likely decrease the frequency to once every 6 months or so.[[User:Mouseman|Mouseman]] ([[User talk:Mouseman|talk]])</div>Mousemanhttps://wiki.archlinux.org/index.php?title=ZFS&diff=549468ZFS2018-10-21T13:26:49Z<p>Mouseman: /* Scrub */ added accuracy note</p>
<hr />
<div>[[Category:File systems]]<br />
[[ja:ZFS]]<br />
[[ru:ZFS]]<br />
[[zh-hans:ZFS]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related|ZFS/Virtual disks}}<br />
{{Related|Installing Arch Linux on ZFS}}<br />
{{Related articles end}}<br />
<br />
[[Wikipedia:ZFS|ZFS]] is an advanced filesystem created by [[Wikipedia:Sun Microsystems|Sun Microsystems]] (now owned by Oracle) and released for OpenSolaris in November 2005. <br />
<br />
Features of ZFS include: pooled storage (integrated volume management – zpool), [[Wikipedia:Copy-on-write|Copy-on-write]], [[Wikipedia:Snapshot (computer storage)|snapshots]], data integrity verification and automatic repair (scrubbing), [[Wikipedia:RAID-Z|RAID-Z]], a maximum [[Wikipedia:Exabyte|16 Exabyte]] file size, and a maximum 256 Quadrillion [[Wikipedia:Zettabyte|Zettabytes]] storage with no limit on number of filesystems (datasets) or files[http://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/index.html]. ZFS is licensed under the [[Wikipedia:CDDL|Common Development and Distribution License]] (CDDL).<br />
<br />
Described as [http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/ "The last word in filesystems"] ZFS is stable, fast, secure, and future-proof. Being licensed under the CDDL, and thus incompatible with GPL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with [http://zfsonlinux.org/ zfsonlinux.org] (ZOL).<br />
<br />
ZOL is a project funded by the [https://www.llnl.gov/ Lawrence Livermore National Laboratory] to develop a native Linux kernel module for its massive storage requirements and super computers.<br />
<br />
{{Note|Due to potential legal incompatibilities between CDDL license of ZFS code and GPL of the Linux kernel ([https://sfconservancy.org/blog/2016/feb/25/zfs-and-linux/ ],[[wikipedia:Common_Development_and_Distribution_License#GPL_compatibility|CDDL-GPL]],[[wikipedia:ZFS#Linux|ZFS in Linux]]) - ZFS development is not supported by the kernel.<br />
<br />
As a result:<br />
* The ZFSonLinux project must keep up with Linux kernel versions. After a stable ZFSonLinux release is made, the Arch ZFS maintainers release updated packages.<br />
* This situation sometimes blocks the normal rolling update process with unsatisfied dependencies, because the new kernel version proposed by the update is not yet supported by ZFSonLinux.}}<br />
<br />
== Installation ==<br />
=== General ===<br />
<br />
{{warning|Unless you use the [[dkms]] versions of these packages, the ZFS and SPL kernel modules are tied to a specific kernel version. It would not be possible to apply any kernel updates until updated packages are uploaded to AUR or the [[Unofficial user repositories#archzfs|archzfs]] repository.}}<br />
<br />
{{Tip| You can [[downgrade]] your linux version to the one from [[Unofficial user repositories#archzfs|archzfs]] repo if your current kernel is newer.}}<br />
<br />
Install from the [[Arch User Repository]] or the [[Unofficial user repositories#archzfs|archzfs]] repository:<br />
* {{AUR|zfs-linux}} for [http://zfsonlinux.org/ stable] releases.<br />
* {{AUR|zfs-linux-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases (with support of newer kernel versions).<br />
* {{AUR|zfs-linux-lts}} for stable releases for LTS kernels.<br />
* {{AUR|zfs-linux-lts-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for LTS kernels.<br />
* {{AUR|zfs-linux-hardened}} for stable releases for hardened kernels.<br />
* {{AUR|zfs-linux-hardened-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for hardened kernels.<br />
* {{AUR|zfs-linux-zen}} for stable releases for zen kernels.<br />
* {{AUR|zfs-linux-zen-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for zen kernels.<br />
* {{AUR|zfs-dkms}} for versions with dynamic kernel module support.<br />
* {{AUR|zfs-dkms-git}} for [https://github.com/zfsonlinux/zfs/releases development] releases for versions with dynamic kernel module support.<br />
<br />
These packages depend on {{ic|zfs-utils}}, {{ic|spl}} and {{ic|spl-utils}}. SPL (Solaris Porting Layer) is a Linux kernel module implementing Solaris APIs for ZFS compatibility.<br />
<br />
Test the installation by issuing {{ic|zpool status}} on the command line. If an "insmod" error is produced, try {{ic|depmod -a}}.<br />
<br />
=== Root on ZFS ===<br />
<br />
When performing an Arch install on ZFS, {{AUR|zfs-linux}} or {{AUR|zfs-dkms}} and its dependencies can be installed in the archiso environment as outlined in the previous section.<br />
<br />
It may be useful to prepare a [[#Embed the archzfs packages into an archiso|customized archiso]] with ZFS support builtin. For a much more detailed guide on installing Arch with ZFS as its root file system, see [[Installing Arch Linux on ZFS]].<br />
<br />
=== DKMS ===<br />
<br />
Users can make use of DKMS [[Dynamic Kernel Module Support]] to rebuild the ZFS modules automatically with every kernel upgrade. <br />
<br />
Install {{AUR|zfs-dkms}} or {{AUR|zfs-dkms-git}} and apply the post-install instructions given by these packages.<br />
<br />
{{Tip|Add an {{ic|IgnorePkg}} entry to [[pacman.conf]] to prevent these packages from upgrading when doing a regular update.}}<br />
<br />
{{Note|Pacman does not take dependencies into consideration when rebuilding DKMS modules. This will result in build failures when pacman tries to rebuild DKMS modules after a kernel upgrade. See bug report {{Bug|52901}} for details. The {{AUR|dkms-sorted}} package adds experimental support for such dependencies; technically, it is a drop-in replacement for the {{ic|dkms}} package. The most convenient way to try out dkms-sorted is to install it ''before'' you install any DKMS modules.}}<br />
<br />
== Experimenting with ZFS ==<br />
<br />
Users wishing to experiment with ZFS on ''virtual block devices'' (known in ZFS terms as VDEVs) which can be simple files like {{ic|~/zfs0.img}} {{ic|~/zfs1.img}} {{ic|~/zfs2.img}} etc. with no possibility of real data loss are encouraged to see the [[Experimenting with ZFS]] article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, snapshotting datasets, etc. are covered.<br />
<br />
== Configuration ==<br />
<br />
ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: {{ic|zfs}} and {{ic|zpool}}.<br />
<br />
=== Automatic Start ===<br />
<br />
For ZFS to live up to its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in {{ic|/etc/fstab}}; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file {{ic|/etc/zfs/zpool.cache}}.<br />
<br />
For each pool you want automatically mounted by the zfs daemon execute:<br />
# zpool set cachefile=/etc/zfs/zpool.cache <pool><br />
<br />
Enable the service so it is automatically started at boot time:<br />
<br />
# systemctl enable zfs.target<br />
<br />
To manually start the daemon:<br />
<br />
# systemctl start zfs.target<br />
<br />
{{Note|Beginning with ZOL version 0.6.5.8 the ZFS service unit files have been changed so that you need to explicitly enable any ZFS services you want to run.<br />
<br />
See [https://github.com/archzfs/archzfs/issues/72 https://github.com/archzfs/archzfs/issues/72] for more information.<br />
<br />
}}<br />
<br />
In order to mount zfs pools automatically on boot you need to enable the following services and targets:<br />
<br />
# systemctl enable zfs-import-cache<br />
# systemctl enable zfs-mount<br />
# systemctl enable zfs-import.target<br />
<br />
or, as explained on [https://github.com/archzfs/archzfs/issues/72 the GitHub issue], use the [https://www.freedesktop.org/software/systemd/man/systemd.preset.html systemd preset file]:<br />
<br />
# systemctl preset $(tail -n +2 /usr/lib/systemd/system-preset/50-zfs.preset | cut -d ' ' -f 2)<br />
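The {{ic|tail}}/{{ic|cut}} pipeline in the preset command simply extracts the unit names (the second field of every line after the first) from the preset file. A small illustration on sample data, assuming the stock file's {{ic|enable <unit>}} layout:

```shell
# Show what `tail -n +2 | cut -d ' ' -f 2` extracts from a preset-style file.
# The sample stands in for /usr/lib/systemd/system-preset/50-zfs.preset.
cat > /tmp/50-zfs.preset.sample <<'EOF'
# ZFS units are not enabled by default
enable zfs-import-cache.service
enable zfs-mount.service
enable zfs.target
EOF

tail -n +2 /tmp/50-zfs.preset.sample | cut -d ' ' -f 2
```

The resulting unit names are exactly what {{ic|systemctl preset}} receives as arguments.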
<br />
== Creating a storage pool ==<br />
<br />
Use {{ic|# parted --list}} to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the ZFS filesystem.<br />
<br />
{{Note|If some or all devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information. ([[Mdadm#Prepare the devices]]) }}<br />
<br />
{{Warning|For Advanced Format Disks with 4KB sector size, an ashift of 12 is recommended for best performance. Advanced Format disks emulate a sector size of 512 bytes for compatibility with legacy systems, this causes ZFS to sometimes use an ashift option number that is not ideal. Once the pool has been created, the only way to change the ashift option is to recreate the pool. Using an ashift of 12 would also decrease available capacity. See [https://github.com/zfsonlinux/zfs/wiki/faq#performance-considerations 1.10 What’s going on with performance?], [https://github.com/zfsonlinux/zfs/wiki/faq#advanced-format-disks 1.15 How does ZFS on Linux handle Advanced Format disks?], and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].}}<br />
<br />
Having identified the list of drives, it is now time to get the IDs of the drives to add to the zpool. The [https://github.com/zfsonlinux/zfs/wiki/faq#selecting-dev-names-when-creating-a-pool zfs on Linux developers recommend] using device IDs when creating ZFS storage pools of less than 10 devices. To find the IDs, simply:<br />
<br />
# ls -lh /dev/disk/by-id/<br />
<br />
The IDs should look similar to the following:<br />
<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd<br />
lrwxrwxrwx 1 root root 9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb<br />
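The by-id names above are simply symlinks to the kernel device nodes; resolving them shows the stable-name-to-device mapping. A tiny sketch using fabricated links (the serial-number names and targets are placeholders, not real devices):

```shell
# Recreate a by-id style directory with fake symlinks and resolve them,
# mimicking the mapping that `ls -lh /dev/disk/by-id/` displays.
mkdir -p /tmp/by-id.sample
ln -sf ../../sdc /tmp/by-id.sample/ata-EXAMPLE_SERIAL1
ln -sf ../../sdb /tmp/by-id.sample/ata-EXAMPLE_SERIAL2
for link in /tmp/by-id.sample/ata-*; do
    printf '%s -> %s\n' "${link##*/}" "$(readlink "$link")"
done
```

Because the by-id name encodes the drive's model and serial, it stays stable across reboots even when the {{ic|sdX}} target changes, which is why it is preferred for pool creation.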
<br />
{{Warning|If you create zpools using device names (e.g. /dev/sda,/dev/sdb,...) ZFS might not be able to detect zpools intermittently on boot.}}<br />
<br />
{{Style|Missing references to [[Persistent block device naming]], it is useless to explain the differences (or even what they are) here.}}<br />
<br />
Disk labels and UUIDs can also be used for ZFS mounts by using [[GUID Partition Table|GPT]] partitions. ZFS drives have labels, but Linux is unable to read them at boot. Unlike [[Master Boot Record|MBR]] partitions, GPT partitions directly support both UUIDs and labels independent of the format inside the partition. Partitioning rather than using the whole disk for ZFS offers two additional advantages. The OS does not generate bogus partition numbers from whatever unpredictable data ZFS has written to the partition sector, and, if desired, you can easily over-provision SSD drives, and slightly over-provision spindle drives, to ensure that different models with slightly different sector counts can be swapped into your mirrors with {{ic|zpool replace}}. This is a lot of organization and control over ZFS using readily available tools and techniques at almost zero cost.<br />
<br />
Use [[GUID Partition Table|gdisk]] to partition all or part of the drive as a single partition. gdisk does not automatically name partitions, so if partition labels are desired, use the gdisk command "c" to label the partitions. Some reasons you might prefer labels over UUID are: labels are easy to control, labels can be titled to make the purpose of each disk in your arrangement readily apparent, and labels are shorter and easier to type. These are all advantages when the server is down and the heat is on. GPT partition labels have plenty of space and can store most international characters [[wikipedia:GUID_Partition_Table#Partition_entries]], allowing large data pools to be labeled in an organized fashion.<br />
<br />
Drives partitioned with GPT have labels and UUIDs that look like this:<br />
<br />
# ls -l /dev/disk/by-partlabel<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata1 -> ../../sdd1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 zfsdata2 -> ../../sdc1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:59 zfsl2arc -> ../../sda1<br />
<br />
# ls -l /dev/disk/by-partuuid<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 148c462c-7819-431a-9aba-5bf42bb5a34e -> ../../sdd1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:59 4f95da30-b2fb-412b-9090-fc349993df56 -> ../../sda1<br />
# lrwxrwxrwx 1 root root 10 Apr 30 01:44 e5ccef58-5adf-4094-81a7-3bac846a885f -> ../../sdc1<br />
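The {{ic|by-partlabel}} and {{ic|by-partuuid}} entries are plain symlinks back to the kernel device names, which is what makes them safe to hand to zpool. A minimal sketch of that mapping, simulated in a temporary directory with the hypothetical names from the listing above:<br />
<br />
```shell
# Simulate how /dev/disk/by-partlabel maps a label to a device:
# each entry is just a relative symlink to the kernel name.
dir=$(mktemp -d)
ln -s ../../sdd1 "$dir/zfsdata1"
# Resolve the label back to the underlying device name.
dev=$(basename "$(readlink "$dir/zfsdata1")")
echo "$dev"   # prints sdd1
rm -r "$dir"
```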
<br />
Now, finally, create the ZFS pool:<br />
<br />
# zpool create -f -m <mount> <pool> [raidz(2|3)|mirror] <ids><br />
<br />
* '''create''': subcommand to create the pool.<br />
<br />
* '''-f''': Force creating the pool. This is to overcome the "EFI label error". See [[#Does not contain an EFI label]].<br />
<br />
* '''-m''': The mount point of the pool. If this is not specified, then the pool will be mounted to {{ic|/<pool>}}.<br />
<br />
* '''pool''': This is the name of the pool.<br />
<br />
* '''raidz(2|3)|mirror''': This is the type of virtual device that will be created from the pool of devices: raidz uses a single disk of parity, raidz2 two disks of parity and raidz3 three disks of parity, similar to raid5 and raid6. Also available is '''mirror''', which is similar to raid1 or raid10, but is not constrained to just two devices. If not specified, each device will be added as a vdev, which is similar to raid0. After creation, a device can be added to each single-drive vdev to turn it into a mirror, which can be useful for migrating data.<br />
<br />
* '''ids''': The names of the drives or partitions to include in the pool. Get them from {{ic|/dev/disk/by-id}}.<br />
<br />
Create pool with single raidz vdev:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
Create pool with two mirror vdevs:<br />
<br />
# zpool create -f -m /mnt/data bigdata \<br />
mirror \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
mirror \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Advanced Format disks ===<br />
<br />
At pool creation, '''ashift=12''' should always be used, except with SSDs that have 8k sectors, where '''ashift=13''' is correct. A vdev of 512-byte disks using 4k sectors will not experience performance issues, but a 4k disk using 512-byte sectors will. Since '''ashift''' cannot be changed after pool creation, even a pool with only 512-byte disks should use 4k, because those disks may need to be replaced with 4k disks or the pool may be expanded by adding a vdev composed of 4k disks. Because correct detection of 4k disks is not reliable, {{ic|<nowiki>-o ashift=12</nowiki>}} should always be specified during pool creation. See the [https://github.com/zfsonlinux/zfs/wiki/faq#advanced-format-disks ZFS on Linux FAQ] for more details.<br />
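The relationship between sector size and ashift is simply the base-2 logarithm: 512-byte sectors give ashift=9, 4096-byte sectors give 12, and 8192-byte sectors give 13. A small helper sketch (hypothetical, not part of ZFS) illustrating the computation:<br />
<br />
```shell
# Hypothetical helper: ashift is log2 of the physical sector size.
sector_to_ashift() {
  local size=$1 ashift=0
  while [ "$size" -gt 1 ]; do
    size=$((size / 2))
    ashift=$((ashift + 1))
  done
  echo "$ashift"
}
sector_to_ashift 512    # prints 9
sector_to_ashift 4096   # prints 12
sector_to_ashift 8192   # prints 13
```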
<br />
Create pool with ashift=12 and single raidz vdev:<br />
<br />
# zpool create -f -o ashift=12 -m /mnt/data bigdata \<br />
raidz \<br />
ata-ST3000DM001-9YN166_S1F0KDGY \<br />
ata-ST3000DM001-9YN166_S1F0JKRR \<br />
ata-ST3000DM001-9YN166_S1F0KBP8 \<br />
ata-ST3000DM001-9YN166_S1F0JTM1<br />
<br />
=== Verifying pool creation ===<br />
<br />
If the command is successful, there will be no output. Using the {{ic|$ mount}} command will show that the pool is mounted. Using {{ic|# zpool status}} will show that the pool has been created.<br />
<br />
{{hc|# zpool status|<br />
pool: bigdata<br />
state: ONLINE<br />
scan: none requested<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata ONLINE 0 0 0<br />
  raidz1-0 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JKRR-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8-part1 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1-part1 ONLINE 0 0 0<br />
<br />
errors: No known data errors<br />
}}<br />
<br />
At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.<br />
<br />
=== GRUB-compatible pool creation ===<br />
<br />
{{note|This section frequently goes out of date with updates to GRUB and ZFS. Consult the manual pages for the most up-to-date information.}}<br />
<br />
By default, ''zpool create'' enables all features on a pool. If {{ic|/boot}} resides on ZFS when using [[GRUB]], you must enable only those features supported by GRUB, otherwise GRUB will not be able to read the pool. GRUB 2.02 supports the read-write features {{ic|lz4_compress}}, {{ic|hole_birth}}, {{ic|embedded_data}}, {{ic|extensible_dataset}}, and {{ic|large_blocks}}; this does not cover all the features of ZFS on Linux 0.7.1, so the unsupported features must be disabled.<br />
<br />
You can create a pool with the incompatible features disabled:<br />
<br />
# zpool create -o feature@multi_vdev_crash_dump=disabled \<br />
-o feature@large_dnode=disabled \<br />
-o feature@sha512=disabled \<br />
-o feature@skein=disabled \<br />
-o feature@edonr=disabled \<br />
$POOL_NAME $VDEVS<br />
<br />
When running the git version of ZFS on Linux, make sure to also add {{ic|1=-o feature@encryption=disabled}}.<br />
<br />
=== Importing a pool created by id ===<br />
<br />
Eventually a pool may fail to mount automatically and you will need to import it to bring it back. Take care to avoid the most obvious solution.<br />
<br />
# zpool import zfsdata   # Do not do this! Always use -d<br />
<br />
This will import your pools using {{ic|/dev/sd?}} names, which will lead to problems the next time you rearrange your drives. This may be triggered by something as simple as rebooting with a USB drive left in the machine, which harkens back to a time when PCs would not boot when a floppy disk was left in the machine. Adapt one of the following commands to import your pool so that pool imports retain the persistence they were created with:<br />
<br />
# zpool import -d /dev/disk/by-id zfsdata<br />
# zpool import -d /dev/disk/by-partlabel zfsdata<br />
# zpool import -d /dev/disk/by-partuuid zfsdata<br />
<br />
== Tuning ==<br />
<br />
=== General ===<br />
<br />
Many parameters are available for ZFS file systems; you can view a full list with {{ic|zfs get all <pool>}}. Two common ones to adjust are '''atime''' and '''compression'''.<br />
<br />
Atime is enabled by default, but for most users it represents superfluous writes to the zpool, and it can be disabled using the zfs command:<br />
# zfs set atime=off <pool><br />
<br />
As an alternative to turning off atime completely, '''relatime''' is available. This brings the default ext4/xfs atime semantics to ZFS, where access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property ''only'' takes effect if '''atime''' is '''on''':<br />
# zfs set relatime=on <pool><br />
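The relatime rule described above can be sketched as a small decision function (timestamps as Unix seconds; this is an illustration of the semantics, not ZFS code):<br />
<br />
```shell
# Update atime only if it is older than mtime or ctime, or if the
# existing atime has not been updated within the past 24 hours.
needs_atime_update() {
  local atime=$1 mtime=$2 ctime=$3 now=$4
  if [ "$atime" -lt "$mtime" ] || [ "$atime" -lt "$ctime" ] || \
     [ $(( now - atime )) -ge 86400 ]; then
    echo yes
  else
    echo no
  fi
}
needs_atime_update 1000 900 900 2000    # prints no: atime is already current
needs_atime_update 1000 1500 900 2000   # prints yes: file modified since last access
```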
<br />
Compression is just that: transparent compression of data. ZFS supports several algorithms; presently '''lz4''' is the default. '''gzip''' is also available for seldom-written yet highly compressible data; consult the man page for more details. Enable compression using the zfs command:<br />
# zfs set compression=on <pool><br />
<br />
Other options for zfs can be displayed again, using the zfs command:<br />
# zfs get all <pool><br />
<br />
=== SSD Caching ===<br />
<br />
You can add SSD devices as a write intent log (external ZIL or SLOG) and also as a layer 2 adaptive replacement cache (L2ARC). The process to add them is very similar to adding a new VDEV.<br />
<br />
All of the below references to device-id are the IDs from {{ic|/dev/disk/by-id/*}}.<br />
<br />
==== SLOG ====<br />
<br />
To add a mirrored SLOG:<br />
# zpool add <pool> log mirror <device-id-1> <device-id-2><br />
<br />
Or to add a single device SLOG (unsafe):<br />
# zpool add <pool> log <device-id><br />
<br />
Because the SLOG device stores data that has not been written to the pool, it is important to use devices that can finish writes when power is lost. It is also important to use redundancy, since a device failure can cause data loss. In addition, the SLOG is only used for sync writes, so may not provide any performance improvement.<br />
<br />
==== L2ARC ====<br />
<br />
To add L2ARC:<br />
# zpool add <pool> cache <device-id><br />
<br />
Because every block cached in L2ARC uses a small amount of memory, it is generally only useful in workloads where the amount of hot data is ''bigger'' than the maximum amount of memory that can fit in the computer, but small enough to fit into L2ARC. It is also cleared at reboot and is only a read cache, so redundancy is unnecessary. Counter-intuitively, L2ARC can actually harm performance, since it takes memory away from the ARC.<br />
<br />
=== Database ===<br />
<br />
ZFS, unlike most other file systems, has a variable record size, commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This often helps with fragmentation and file access, at the cost that ZFS has to allocate a new 128KiB block each time only a few bytes are written.<br />
<br />
Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for [[MySQL|MySQL/MariaDB]], [[PostgreSQL]], and [[Oracle]], all three of them use an 8KiB block size ''by default''. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:<br />
# zfs set recordsize=8K <pool>/postgres<br />
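The effect of recordsize on write granularity is easy to see with a little arithmetic: a 1 MiB database file is covered by far fewer records at the default 128KiB than at 8KiB, so an 8KiB random write dirties a much smaller on-disk unit:<br />
<br />
```shell
# Records needed to cover a 1 MiB file at each recordsize.
echo $(( 1048576 / 131072 ))   # 128 KiB recordsize: prints 8
echo $(( 1048576 / 8192 ))     # 8 KiB recordsize: prints 128
```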
<br />
These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:<br />
# zfs set primarycache=metadata <pool>/postgres<br />
<br />
If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases are often syncing their data files to the file system on their own transaction commits anyway. The end result of this is that ZFS will be committing data '''twice''' to the data disks, and it can severely impact performance. You can tell ZFS to prefer to not use the ZIL, and in which case, data is only committed to the file system once. Setting this for non-database file systems, or for pools with configured log devices, can actually ''negatively'' impact the performance, so beware:<br />
# zfs set logbias=throughput <pool>/postgres<br />
<br />
These can also be done at file system creation time, for example:<br />
# zfs create -o recordsize=8K \<br />
-o primarycache=metadata \<br />
-o mountpoint=/var/lib/postgres \<br />
-o logbias=throughput \<br />
<pool>/postgres<br />
<br />
Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily ''hurt'' ZFS's performance by setting these on a general-purpose file system such as your /home directory.<br />
<br />
=== /tmp ===<br />
<br />
If you would like to use ZFS to store your /tmp directory, which may be useful for storing arbitrarily-large sets of files or simply keeping your RAM free of idle data, you can generally improve performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (eg, with {{ic|fsync}} or {{ic|O_SYNC}}) and return immediately. While this has severe application-side data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected. Please note this does ''not'' affect the integrity of ZFS itself, only the possibility that data an application expects on-disk may not have actually been written out following a crash.<br />
# zfs set sync=disabled <pool>/tmp<br />
<br />
Additionally, for security purposes, you may want to disable '''setuid''' and '''devices''' on the /tmp file system, which prevents some kinds of privilege-escalation attacks or the use of device nodes:<br />
# zfs set setuid=off <pool>/tmp<br />
# zfs set devices=off <pool>/tmp<br />
<br />
Combining all of these for a create command would be as follows:<br />
# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp<br />
<br />
Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) [[systemd]]'s automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:<br />
# systemctl mask tmp.mount<br />
<br />
=== ZVOLs ===<br />
<br />
ZFS volumes (ZVOLs) can suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default recordsize for ZVOLs is 8KiB already. If possible, it is best to align any partitions contained in a ZVOL to your recordsize (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and file system block sizes to the same size. Other than this, you might tweak the '''recordsize''' to accommodate the data inside the ZVOL as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).<br />
<br />
==== RAIDZ and Advanced Format physical disks ====<br />
<br />
Each block of a ZVOL gets its own parity disks, and if you have physical media with logical block sizes of 4096B, 8192B, or so on, the parity needs to be stored in whole physical blocks, and this can drastically increase the space requirements of a ZVOL, requiring 2× or more physical storage capacity than the ZVOL's logical capacity. Setting the '''recordsize''' to 16k or 32k can help reduce this footprint drastically.<br />
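As a rough worked example of this overhead (assumptions: raidz1, {{ic|1=ashift=12}}, the default 8KiB ZVOL block size), each 8KiB logical block needs two 4KiB data sectors plus one whole 4KiB parity sector:<br />
<br />
```shell
# Bytes on disk for one 8 KiB ZVOL block on raidz1 with 4 KiB sectors:
# two data sectors plus one parity sector.
echo $(( (8192 / 4096 + 1) * 4096 ))   # prints 12288, i.e. 1.5x the logical size
```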
<br />
See [https://github.com/zfsonlinux/zfs/issues/1807 ZFS on Linux issue #1807] for details.<br />
<br />
== Usage ==<br />
<br />
Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:<br />
<br />
# zfs create <nameofzpool>/<nameofdataset><br />
<br />
It is then possible to apply ZFS specific attributes to the dataset. For example, one could assign a quota limit to a specific directory within a dataset:<br />
<br />
# zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory><br />
<br />
To see all the commands available in ZFS, use:<br />
<br />
$ man zfs<br />
<br />
or:<br />
<br />
$ man zpool<br />
<br />
=== Native encryption ===<br />
Native ZFS encryption is available in 0.7.0.r26 or newer, as provided by packages like {{AUR|zfs-linux-git}}, {{AUR|zfs-dkms-git}} or other development builds. Although version 0.7 has been released, this feature is still not enabled in the stable version as of 0.7.3, so a development build still needs to be used. An easy way of telling whether encryption is available in your installed version of ZFS is to check for the {{ic|ZFS_PROP_ENCRYPTION}} definition in {{ic|/usr/src/zfs-*/include/sys/fs/zfs.h}}.<br />
<br />
* Supported encryption options: {{ic|aes-128-ccm}}, {{ic|aes-192-ccm}}, {{ic|aes-256-ccm}}, {{ic|aes-128-gcm}}, {{ic|aes-192-gcm}} and {{ic|aes-256-gcm}}. When encryption is set to {{ic|on}}, {{ic|aes-256-ccm}} will be used.<br />
* Supported keyformats: {{ic|passphrase}}, {{ic|raw}}, {{ic|hex}}<br />
You can also specify the number of PBKDF2 iterations with {{ic|-o pbkdf2iters <n>}}; higher values make deriving the key from the passphrase take longer.<br />
<br />
To create a dataset including native encryption with a passphrase, use:<br />
<br />
# zfs create -o encryption=on -o keyformat=passphrase <nameofzpool>/<nameofdataset><br />
<br />
To use a key instead of using a passphrase:<br />
<br />
# dd if=/dev/urandom of=/path/to/key bs=1 count=32<br />
# zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///path/to/key <nameofzpool>/<nameofdataset><br />
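A quick sanity check that the generated raw key has the required 32-byte length (a temporary file stands in for {{ic|/path/to/key}} here):<br />
<br />
```shell
# Generate a 32-byte raw key and verify its size.
key=$(mktemp)
dd if=/dev/urandom of="$key" bs=1 count=32 2>/dev/null
size=$(wc -c < "$key")
echo "$size"   # prints 32
rm "$key"
```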
<br />
You can also manually load the keys and then mount the encrypted dataset:<br />
# zfs load-key <nameofzpool>/<nameofdataset> # load key for a specific dataset<br />
# zfs load-key -a # load all keys<br />
# zfs load-key -r zpool/dataset # recursively load all keys under a dataset<br />
<br />
When importing a pool that contains encrypted datasets, ZFS will not decrypt them by default. To load the keys during import, use {{ic|-l}}:<br />
# zpool import -l pool<br />
<br />
You can automate this at boot with a custom systemd unit. For example: <br />
{{hc|/etc/systemd/system/zfs-key@.service|2=<nowiki><br />
[Unit]<br />
Description=Load storage encryption keys<br />
DefaultDependencies=no<br />
Before=systemd-user-sessions.service<br />
Before=zfs-mount.service<br />
After=zfs-import.target<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=/usr/bin/bash -c 'systemd-ask-password "Encrypted storage password (%i): " | /usr/bin/zfs load-key zpool/%i'<br />
<br />
[Install]<br />
WantedBy=zfs-mount.service<br />
</nowiki>}}<br />
and enable a service instance for each encrypted volume: {{ic|# systemctl enable zfs-key@dataset}}.<br />
<br />
The {{ic|1=Before=systemd-user-sessions.service}} line ensures that ''systemd-ask-password'' is invoked before the local IO devices are handed over to the system UI.<br />
<br />
=== Scrub ===<br />
{{Accuracy|Since when do pools have to be scrubbed at least once a week? Unsubstantiated claim.}}<br />
ZFS pools should be scrubbed regularly; a common recommendation is at least once a week. To scrub the pool:<br />
<br />
# zpool scrub <pool><br />
<br />
To do automatic scrubbing once a week, set the following line in the root crontab:<br />
<br />
{{hc|# crontab -e|<br />
...<br />
30 19 * * 5 zpool scrub <pool><br />
...<br />
}}<br />
<br />
Replace {{ic|<pool>}} with the name of the ZFS pool.<br />
<br />
=== Check zfs pool status ===<br />
<br />
To print a nice table with statistics about the ZFS pool, including read/write errors, use:<br />
<br />
# zpool status -v<br />
<br />
=== Destroy a storage pool ===<br />
<br />
ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:<br />
<br />
# zpool destroy <pool><br />
<br />
And now when checking the status:<br />
<br />
{{hc|# zpool status|no pools available}}<br />
<br />
To find the name of the pool, see [[#Check zfs pool status]].<br />
<br />
=== Exporting a storage pool ===<br />
<br />
If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid is different in the archiso than in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the {{ic|-f}} argument, but this is considered bad form.<br />
<br />
Any attempt to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can occur at boot time, abruptly dropping the system into the busybox console and requiring an archiso for an emergency repair, by either exporting the pool or adding {{ic|zfs_force&#61;1}} to the kernel boot parameters (which is not ideal). See [[#On boot the zfs pool does not mount stating: "pool may be in use from other system"]].<br />
<br />
To export a pool:<br />
<br />
# zpool export <pool><br />
<br />
=== Renaming a zpool ===<br />
<br />
Renaming a zpool that is already created is accomplished in two steps:<br />
<br />
# zpool export oldname<br />
# zpool import oldname newname<br />
<br />
=== Setting a different mount point ===<br />
<br />
The mount point for a given zpool can be moved at will with one command:<br />
# zfs set mountpoint=/foo/bar poolname<br />
<br />
=== Access Control Lists ===<br />
To use [[ACL]] on a ZFS pool:<br />
<br />
# zfs set acltype=posixacl <nameofzpool>/<nameofdataset><br />
# zfs set xattr=sa <nameofzpool>/<nameofdataset><br />
<br />
Setting {{ic|xattr}} is recommended for performance reasons [https://github.com/zfsonlinux/zfs/issues/170#issuecomment-27348094].<br />
<br />
=== Swap volume ===<br />
<br />
ZFS does not allow swap files to be used, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained with the {{ic|getconf PAGESIZE}} command (the default on x86_64 is 4KiB). Another option useful for keeping the system running well in low-memory situations is not caching the ZVOL data.<br />
<br />
Create an 8 GiB ZFS volume:<br />
<br />
# zfs create -V 8G -b $(getconf PAGESIZE) \<br />
-o logbias=throughput -o sync=always \<br />
-o primarycache=metadata \<br />
-o com.sun:auto-snapshot=false <pool>/swap<br />
<br />
Prepare it as swap partition:<br />
<br />
# mkswap -f /dev/zvol/<pool>/swap<br />
# swapon /dev/zvol/<pool>/swap<br />
<br />
To make it permanent, edit {{ic|/etc/fstab}}. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.<br />
<br />
Add a line to {{ic|/etc/fstab}}:<br />
<br />
/dev/zvol/<pool>/swap none swap discard 0 0<br />
<br />
{{Out of date|The hibernate hook is deprecated. Does the limitation still apply?}}<br />
Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow use of the hibernate function. If you need hibernate, keep a partition for it.<br />
<br />
=== Automatic snapshots ===<br />
<br />
==== ZFS Automatic Snapshot Service for Linux ====<br />
<br />
The {{AUR|zfs-auto-snapshot-git}} package from [[AUR]] provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the {{ic|--keep}} parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).<br />
<br />
To prevent a dataset from being snapshotted at all, set {{ic|1=com.sun:auto-snapshot=false}} on it. Likewise, more fine-grained control is possible per label: for example, to keep no monthly snapshots of a dataset, set {{ic|1=com.sun:auto-snapshot:monthly=false}}.<br />
<br />
{{Note|zfs-auto-snapshot-git will not create snapshots during a [[#Scrub|scrub]]. It is possible to override this by [[Systemd#Editing provided units|editing the provided systemd unit]] and removing {{ic|--skip-scrub}} from the {{ic|ExecStart}} line. The consequences of doing so have not been documented.}}<br />
<br />
==== ZFS Snapshot Manager ====<br />
<br />
The {{AUR|zfs-snap-manager}} package from [[AUR]] provides a python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a [[wikipedia:Backup_rotation_scheme#Grandfather-father-son|"Grandfather-father-son"]] scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots. <br />
<br />
The package also supports configurable replication to other machines running ZFS by means of {{ic|zfs send}} and {{ic|zfs receive}}. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.<br />
<br />
== Troubleshooting ==<br />
=== Creating a zpool fails ===<br />
<br />
If the following error occurs when creating a zpool, it can be fixed:<br />
<br />
the kernel failed to rescan the partition table: 16<br />
cannot label 'sdc': try using parted(8) and then provide a specific slice: -1<br />
<br />
One reason this can occur is because [https://github.com/zfsonlinux/zfs/issues/2582 ZFS expects pool creation to take less than 1 second]. This is a reasonable assumption under ordinary conditions, but in many situations it may take longer. Each drive will need to be cleared again before another attempt can be made.<br />
<br />
# parted /dev/sda rm 1<br />
# parted /dev/sda rm 2<br />
# dd if=/dev/zero of=/dev/sda bs=512 count=1<br />
# zpool labelclear /dev/sda<br />
<br />
A brute-force creation can be attempted over and over again, and with some luck the zpool creation will take less than 1 second.<br />
One cause of creation slowdown can be slow burst reads/writes on a drive. By reading from the disk in parallel with zpool creation, it may be possible to increase burst speeds.<br />
<br />
# dd if=/dev/sda of=/dev/null<br />
<br />
This can be done with multiple drives by saving the above command for each drive to a file on separate lines and running:<br />
<br />
# cat $FILE | parallel<br />
<br />
Then run ZPool creation at the same time.<br />
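The per-drive read commands can be generated into the file mechanically; a sketch (the drive names sda/sdb/sdc are placeholders for your actual pool members):<br />
<br />
```shell
# Build the command file fed to parallel, one dd per drive.
cmds=$(mktemp)
printf 'dd if=/dev/%s of=/dev/null\n' sda sdb sdc > "$cmds"
n=$(wc -l < "$cmds")
echo "$n"   # prints 3: one command per drive
rm "$cmds"
```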
<br />
=== ZFS is using too much RAM ===<br />
<br />
By default, ZFS caches file operations ([[wikipedia:Adaptive replacement cache|ARC]]) using up to two-thirds of available system memory on the host. To adjust the ARC size, add the following to the [[Kernel parameters]] list:<br />
<br />
zfs.zfs_arc_max=536870912 # (for 512 MiB)<br />
<br />
For a more detailed description, as well as other configuration options, see [http://wiki.gentoo.org/wiki/ZFS#ARC gentoo-wiki:zfs#arc].<br />
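The parameter takes a byte count, so converting from a human-friendly size is a matter of multiplying out; for example, 512 MiB (the helper function here is hypothetical):<br />
<br />
```shell
# Convert a size in MiB to the byte value expected by zfs.zfs_arc_max.
arc_max_bytes() { echo $(( $1 * 1024 * 1024 )); }
arc_max_bytes 512   # prints 536870912
```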
<br />
=== Does not contain an EFI label ===<br />
<br />
The following error may occur when attempting to create a zfs filesystem:<br />
<br />
/dev/disk/by-id/<id> does not contain an EFI label but it may contain partition<br />
<br />
The way to overcome this is to use {{ic|-f}} with the ''zpool create'' command.<br />
<br />
=== No hostid found ===<br />
<br />
An error that occurs at boot with the following lines appearing before initscript output:<br />
<br />
ZFS: No hostid found on kernel command line or /etc/hostid.<br />
<br />
This warning occurs because the ZFS module does not have access to the SPL hostid. There are two solutions for this. Either place the SPL hostid in the [[kernel parameters]] in the boot loader, for example by adding {{ic|1=spl.spl_hostid=0x00bab10c}}.<br />
<br />
The other solution is to make sure that there is a hostid in {{ic|/etc/hostid}}, and then regenerate the initramfs image, which will copy the hostid into it:<br />
<br />
# mkinitcpio -p linux<br />
<br />
=== Pool cannot be found while booting from SAS/SCSI devices ===<br />
<br />
In case you are booting from SAS/SCSI based devices, you might occasionally get boot problems where the pool you are trying to boot from cannot be found. A likely reason for this is that your devices are initialized too late in the process: ZFS cannot find any devices at the time when it tries to assemble your pool.<br />
<br />
In this case you should force the scsi driver to wait for devices to come online before continuing. You can do this by putting this into {{ic|/etc/modprobe.d/zfs.conf}}:<br />
<br />
{{hc|1=/etc/modprobe.d/zfs.conf|2=<br />
options scsi_mod scan=sync<br />
}}<br />
<br />
Afterwards, [[regenerate the initramfs]].<br />
<br />
This works because the zfs hook will copy the file at {{ic|/etc/modprobe.d/zfs.conf}} into the initcpio which will then be used at build time.<br />
<br />
=== On boot the zfs pool does not mount stating: "pool may be in use from other system" ===<br />
<br />
==== Unexported pool ====<br />
<br />
If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See [[#Emergency chroot repair with archzfs]].<br />
<br />
Once inside the chroot environment, load the ZFS module and force import the zpool,<br />
<br />
# zpool import -a -f<br />
<br />
now export the pool:<br />
<br />
# zpool export <pool><br />
<br />
To see the available pools, use,<br />
<br />
# zpool status<br />
<br />
It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso, the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See [http://osdir.com/ml/zfs-discuss/2011-06/msg00227.html Re: Howto zpool import/export automatically? - msg#00227].<br />
<br />
If ZFS complains about "pool may be in use" after every reboot, properly export pool as described above, and then rebuild ramdisk in normally booted system:<br />
<br />
# mkinitcpio -p linux<br />
<br />
==== Incorrect hostid ====<br />
<br />
Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership. So during the first boot the zpool should mount correctly. If it does not there is some other problem.<br />
<br />
Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.<br />
<br />
Boot using {{ic|zfs_force&#61;1}} and write down the hostid. The following is just an example:<br />
$ hostid<br />
0a0af0f8<br />
<br />
This number has to be added to the [[kernel parameters]] as {{ic|spl.spl_hostid&#61;0x0a0af0f8}}. Another solution is writing the hostid inside the initramfs image; see the [[Installing Arch Linux on ZFS#After the first boot|installation guide]] explanation about this.<br />
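Forming the kernel parameter from the {{ic|hostid}} output is mechanical: prefix the hex digits with {{ic|0x}}. A sketch using the example value from above:<br />
<br />
```shell
# Turn the example hostid output into the kernel parameter value.
h=0a0af0f8
param="spl.spl_hostid=0x${h}"
echo "$param"   # prints spl.spl_hostid=0x0a0af0f8
```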
<br />
Users can always ignore the check adding {{ic|zfs_force&#61;1}} in the [[kernel parameters]], but it is not advisable as a permanent solution.<br />
<br />
=== Devices have different sector alignment ===<br />
<br />
Once a drive has become faulted, it should be replaced as soon as possible with an identical drive.<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -f<br />
<br />
but in this instance, the following error is produced:<br />
<br />
cannot replace ata-ST3000DM001-9YN166_S1F0KDGY with ata-ST3000DM001-1CH166_W1F478BD: devices have different sector alignment<br />
<br />
ZFS uses the ashift option to adjust for physical block size. When replacing the faulted disk, ZFS is attempting to use {{ic|<nowiki>ashift=12</nowiki>}}, but the faulted disk is using a different ashift (probably {{ic|<nowiki>ashift=9</nowiki>}}) and this causes the resulting error. <br />
<br />
For Advanced Format Disks with 4KB blocksize, an ashift of 12 is recommended for best performance. See [https://github.com/zfsonlinux/zfs/wiki/faq#performance-considerations 1.10 What’s going on with performance?] and [http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ZFS and Advanced Format disks].<br />
<br />
Use zdb to find the ashift of the zpool: {{ic|zdb {{!}} grep ashift}}, then use the {{ic|-o}} argument to set the ashift of the replacement drive:<br />
<br />
# zpool replace bigdata ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-1CH166_W1F478BD -o ashift=9 -f<br />
<br />
Check the zpool status for confirmation:<br />
<br />
{{hc|# zpool status -v|<br />
pool: bigdata<br />
state: DEGRADED<br />
status: One or more devices is currently being resilvered. The pool will<br />
continue to function, possibly in a degraded state.<br />
action: Wait for the resilver to complete.<br />
scan: resilver in progress since Mon Jun 16 11:16:28 2014<br />
10.3G scanned out of 5.90T at 81.7M/s, 20h59m to go<br />
2.57G resilvered, 0.17% done<br />
config:<br />
<br />
NAME STATE READ WRITE CKSUM<br />
bigdata DEGRADED 0 0 0<br />
raidz1-0 DEGRADED 0 0 0<br />
replacing-0 OFFLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KDGY OFFLINE 0 0 0<br />
ata-ST3000DM001-1CH166_W1F478BD ONLINE 0 0 0 (resilvering)<br />
ata-ST3000DM001-9YN166_S1F0JKRR ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0KBP8 ONLINE 0 0 0<br />
ata-ST3000DM001-9YN166_S1F0JTM1 ONLINE 0 0 0<br />
<br />
errors: No known data errors}}<br />
<br />
=== Pool resilvering stuck/restarting/slow? ===<br />
<br />
According to the ZFSonLinux GitHub issue tracker, this has been a known issue since 2012: on some hardware, ZED (the ZFS Event Daemon) causes the resilvering process to constantly restart, sometimes get stuck, and be generally slow. The simplest mitigation is to stop {{ic|zfs-zed.service}} until the resilver completes.<br />
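<br />
A minimal mitigation sequence, using the service name from this article (run as root, and monitor progress until the resilver finishes):<br />
<br />
 # systemctl stop zfs-zed.service<br />
 # zpool status<br />
 # systemctl start zfs-zed.service<br />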
<br />
== Tips and tricks ==<br />
<br />
===Embed the archzfs packages into an archiso===<br />
<br />
Follow the [[Archiso]] steps for creating a fully functional Arch Linux live CD/DVD/USB image.<br />
<br />
Enable the [[Unofficial user repositories#archzfs|archzfs]] repository:<br />
<br />
{{hc|~/archlive/pacman.conf|<nowiki><br />
...<br />
[archzfs]<br />
Server = http://archzfs.com/$repo/x86_64<br />
</nowiki>}}<br />
<br />
Add the {{ic|archzfs-linux}} group to the list of packages to be installed (the {{ic|archzfs}} repository provides packages for the x86_64 architecture only).<br />
<br />
{{hc|~/archlive/packages.x86_64|<br />
...<br />
archzfs-linux<br />
}}<br />
<br />
Finally, follow [[Archiso#Build_the_ISO|Build the ISO]] to build the image.<br />
<br />
{{Note|If you later have problems running {{ic|modprobe zfs}}, include {{ic|linux-headers}} in {{ic|packages.x86_64}}.}}<br />
<br />
=== Encryption in ZFS using dm-crypt ===<br />
The stable release version of ZFS on Linux does not support encryption directly, but zpools can be created in dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.<br />
<br />
dm-crypt, possibly via LUKS, creates devices in {{ic|/dev/mapper}} with fixed names, so you just need to change the {{ic|zpool create}} commands to point to those names. The idea is to configure the system to create the {{ic|/dev/mapper}} block devices and import the zpools from there. Since zpools can be created across multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.<br />
<br />
For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:<br />
<br />
# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 \<br />
--key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc<br />
# zpool create zroot /dev/mapper/enc<br />
<br />
In the case of a root filesystem pool, the {{ic|mkinitcpio.conf}} HOOKS line needs to enable the keyboard for the password prompt, create the devices, and load the pools. It will contain something like:<br />
<br />
HOOKS="... keyboard encrypt zfs ..."<br />
<br />
Since the {{ic|/dev/mapper/enc}} name is fixed, no import errors will occur.<br />
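<br />
For pools that are not needed in early userspace, the mapping can instead be created at boot via {{ic|/etc/crypttab}}. A sketch mirroring the plain-mode ''cryptsetup'' invocation above (device names and options are illustrative, adjust to your setup):<br />
<br />
{{hc|/etc/crypttab|<nowiki><br />
enc  /dev/sdX  /dev/sdZ  plain,hash=sha512,cipher=twofish-xts-plain64,size=512,offset=0<br />
</nowiki>}}<br />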
<br />
Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.<br />
<br />
ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even identical input produces different output (thanks to salting), making deduplication impossible. To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use [[eCryptfs]] on it.<br />
<br />
For example, to have an encrypted home (the two passwords, encryption and login, must be the same):<br />
<br />
# zfs create -o compression=off -o dedup=off -o mountpoint=/home/<username> <zpool>/<username><br />
# useradd -m <username><br />
# passwd <username><br />
# ecryptfs-migrate-home -u <username><br />
<log in user and complete the procedure with ecryptfs-unwrap-passphrase><br />
<br />
=== Emergency chroot repair with archzfs ===<br />
<br />
To get into the ZFS filesystem from live system for maintenance, there are two options:<br />
<br />
# Build custom archiso with ZFS as described in [[#Embed the archzfs packages into an archiso]].<br />
# Boot the latest official archiso and bring up the network. Then enable [[Unofficial_user_repositories#archzfs|archzfs]] repository inside the live system as usual, sync the pacman package database and install the ''archzfs-archiso-linux'' package.<br />
<br />
To start the recovery, load the ZFS kernel modules:<br />
<br />
# modprobe zfs<br />
<br />
Import the pool:<br />
<br />
# zpool import -a -R /mnt<br />
<br />
Mount the boot partitions (if any):<br />
<br />
# mount /dev/sda2 /mnt/boot<br />
# mount /dev/sda1 /mnt/boot/efi<br />
<br />
Chroot into the ZFS filesystem:<br />
<br />
# arch-chroot /mnt /bin/bash<br />
<br />
Check the kernel version:<br />
<br />
# pacman -Qi linux<br />
# uname -r<br />
<br />
''uname'' will show the kernel version of the archiso. If the two versions differ, run ''depmod'' (in the chroot) with the correct kernel version of the chroot installation:<br />
<br />
# depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux but using the matching kernel modules directory name under the chroot's /lib/modules)<br />
<br />
This will load the correct kernel modules for the kernel version installed in the chroot installation.<br />
<br />
Regenerate the ramdisk:<br />
<br />
# mkinitcpio -p linux<br />
<br />
There should be no errors.<br />
<br />
=== Bind mount ===<br />
<br />
In this example, a bind mount from {{ic|/mnt/zfspool}} to {{ic|/srv/nfs4/music}} is created. The configuration ensures that the ZFS pool is ready before the bind mount is created.<br />
<br />
==== fstab ====<br />
See [http://www.freedesktop.org/software/systemd/man/systemd.mount.html systemd.mount] for more information on how systemd converts fstab into mount unit files with [http://www.freedesktop.org/software/systemd/man/systemd-fstab-generator.html systemd-fstab-generator].<br />
<br />
{{hc|/etc/fstab|<nowiki><br />
/mnt/zfspool /srv/nfs4/music none bind,defaults,nofail,x-systemd.requires=zfs-mount.service 0 0<br />
</nowiki>}}<br />
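<br />
For illustration, ''systemd-fstab-generator'' turns this line into a mount unit roughly equivalent to the following (a sketch only; the actual generated unit differs in detail):<br />
<br />
{{hc|srv-nfs4-music.mount|<nowiki><br />
[Unit]<br />
Requires=zfs-mount.service<br />
After=zfs-mount.service<br />
<br />
[Mount]<br />
What=/mnt/zfspool<br />
Where=/srv/nfs4/music<br />
Type=none<br />
Options=bind<br />
</nowiki>}}<br />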
<br />
=== Monitoring / Mailing on Events ===<br />
See [https://ramsdenj.com/2016/08/29/arch-linux-on-zfs-part-3-followup.html ZED: The ZFS Event Daemon] for more information.<br />
<br />
At the minimum, an email forwarder, such as [[msmtp]], is required to accomplish this. Make sure it is working correctly.<br />
<br />
Uncomment the following in the configuration file:<br />
<br />
{{hc|/etc/zfs/zed.d/zed.rc|<nowiki><br />
ZED_EMAIL_ADDR="root"<br />
ZED_EMAIL_PROG="mail"<br />
ZED_NOTIFY_VERBOSE=0<br />
ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"<br />
</nowiki>}}<br />
<br />
Update 'root' in {{ic|1=ZED_EMAIL_ADDR="root"}} to the email address you want to receive notifications at.<br />
<br />
If you want to receive an email no matter the state of your pool, you will want to set {{ic|1=ZED_NOTIFY_VERBOSE=1}}.<br />
<br />
Start and enable {{ic|zfs-zed.service}}.<br />
<br />
If you set verbose to 1, you can test by running a scrub.<br />
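<br />
For example, a scrub can be started manually (substitute your pool name):<br />
<br />
 # zpool scrub <pool><br />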
<br />
===Wrap shell commands in pre & post snapshots===<br />
<br />
Since it is so cheap to make a snapshot, we can use this as a safety measure for sensitive commands such as system and package upgrades. If we make a snapshot before, and one after, we can later diff these snapshots to find out what changed on the filesystem after the command executed. Furthermore, we can also roll back in case the outcome was not desired.<br />
<br />
E.g.:<br />
<br />
# zfs snapshot -r zroot@pre<br />
# pacman -Syyu # dangerous command<br />
# zfs snapshot -r zroot@post<br />
# zfs diff zroot@pre zroot@post <br />
# zfs rollback zroot@pre<br />
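<br />
The sequence above can be wrapped in a small shell function (the function name and the hardcoded pool are illustrative):<br />
<br />
 snapwrap() {<br />
     zfs snapshot -r zroot@pre<br />
     "$@"<br />
     zfs snapshot -r zroot@post<br />
 }<br />
<br />
Invoke it as {{ic|snapwrap pacman -Syu}} and diff or roll back the resulting snapshots as shown above.<br />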
<br />
A utility that automates the creation of pre and post snapshots around a shell command is [https://gist.github.com/erikw/eeec35be33e847c211acd886ffb145d5 znp].<br />
<br />
E.g.:<br />
<br />
# znp pacman -Syyu<br />
# znp find / -name "something*" -delete<br />
<br />
This creates snapshots before and after the supplied command, and also logs the command's output to a file for future reference, so that we know which command created the diff seen in a pair of pre/post snapshots.<br />
<br />
== See also ==<br />
<br />
* [https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/ Aaron Toponce's 17-part blog on ZFS]<br />
* [http://zfsonlinux.org/ ZFS on Linux]<br />
* [https://github.com/zfsonlinux/zfs/wiki/faq ZFS on Linux FAQ]<br />
* [https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/zfs.html FreeBSD Handbook -- The Z File System]<br />
* [https://docs.oracle.com/cd/E19253-01/819-5461/index.html Oracle Solaris ZFS Administration Guide]<br />
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Solaris Internals -- ZFS Troubleshooting Guide]{{Dead link|2017|05|30}}<br />
* [http://royal.pingdom.com/2013/06/04/zfs-backup/ How Pingdom uses ZFS to back up 5TB of MySQL data every day]<br />
* [https://www.linuxquestions.org/questions/linux-from-scratch-13/%5Bhow-to%5D-add-zfs-to-the-linux-kernel-4175514510/ Tutorial on adding the modules to a custom kernel]</div>Mousemanhttps://wiki.archlinux.org/index.php?title=Talk:ZFS&diff=549467Talk:ZFS2018-10-21T13:15:48Z<p>Mouseman: /* Scrub */ new section</p>
<hr />
<div>== Bindmount ==<br />
Where does this file go and what other steps are required?<br />
<br />
I would expect: /etc/systemd/system/<br />
<br />
Then: systemctl enable srv-nfs4-media.mount<br />
<br />
[[User:Msalerno|Msalerno]] ([[User talk:Msalerno|talk]]) 02:36, 22 October 2015 (UTC)<br />
<br />
== resume hook ==<br />
I think there is a typo in the page: it should state ''resume hook'' instead of hibernate, but the limitation still applies. Can anyone confirm that the resume hook must appear before filesystems? [[User:Ezzetabi|Ezzetabi]] ([[User talk:Ezzetabi|talk]]) 09:49, 18 August 2015 (UTC)<br />
<br />
== Automatic build script ==<br />
<br />
I'm fine with deleting the scripts. I only posted it because graysky's script never worked for me. Long stuff like this would be useful if the ArchWiki featured roll-up text. [[User:Severach|Severach]] ([[User talk:Severach|talk]]) 10:07, 9 August 2015 (UTC)<br />
<br />
:I'd suggest to maintain it in a github repo. You get better versioning, syntax highlighting, cloning, etc. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 12:46, 9 August 2015 (UTC)<br />
<br />
::...or an [https://help.github.com/articles/about-gists/#anonymous-gists anonymous gist] if you don't have nor want to create a GitHub account. — [[User:Kynikos|Kynikos]] ([[User talk:Kynikos|talk]]) 08:40, 10 August 2015 (UTC)<br />
<br />
:Isn't that exactly what DKMS is doing? There are DKMS packages in the AUR. [[User:Das j|Das j]] ([[User talk:Das j|talk]]) 20:01, 10 January 2016 (UTC)<br />
<br />
== Automatic snapshots ==<br />
{{AUR|zfs-auto-snapshot-git}} seems to have disappeared from the AUR. I haven't been able to find any information on why it was deleted; does anyone know? In any case, it should probably be removed from this page.<br />
[[User:Warai otoko|warai otoko]] ([[User talk:Warai otoko|talk]]) 03:21, 2 September 2015 (UTC)<br />
<br />
:On further inspection, looks like it may have gotten lost in the transition to AUR4. It should be resubmitted if we want to continue recommending it here; I've found it useful, at any rate. [[User:Warai otoko|Warai otoko]] ([[User talk:Warai otoko|talk]]) 04:43, 2 September 2015 (UTC)<br />
<br />
:: I've recreated it. I use this script as well. --[[User:Chungy|Chungy]] ([[User talk:Chungy|talk]]) 02:49, 3 September 2015 (UTC)<br />
<br />
== Configuration ==<br />
<br />
The configuration section has WAY too little information about what systemd unit(s) to enable. Thanks to @kerberizer I finally managed to get the mounts working with the command<br />
<br />
# systemctl preset $(tail -n +2 /usr/lib/systemd/system-preset/50-zfs.preset | cut -d ' ' -f 2)<br />
<br />
[[User:Z3ntu|Z3ntu]] ([[User talk:Z3ntu|talk]]) 15:21, 27 October 2016 (UTC)<br />
<br />
<br />
@Z3ntu I have ZFS running on a few systems and never had to enable any services, it should work by default, if not then file a bug on the package<br />
<br />
[[User:Justin8|Justin8]] ([[User talk:Justin8|talk]]) 22:04, 27 October 2016 (UTC)<br />
<br />
@Justin8 I tried it both in a virtual machine and on a physical computer: when you don't enable any services (I use "zfs-linux" from the archzfs repo), create a pool and reboot, the pool doesn't exist anymore ({{ic|zpool status}}) and the pools don't get mounted without the zfs-mount service (or whatever it is called). I found a related issue on github: https://github.com/archzfs/archzfs/issues/61<br />
<br />
[[User:Z3ntu|Z3ntu]] ([[User talk:Z3ntu|talk]]) 08:34, 28 October 2016 (UTC)<br />
<br />
<br />
There seems to be a new systemd target ''zfs-import.target'' which must be enabled in order to auto-mount? Otherwise ''zfs-mount.service'' will be executed before ''zfs-import-cache.service'' on my machine and nothing will be mounted. --[[User:Swordfeng|Swordfeng]] ([[User talk:Swordfeng|talk]]) 12:55, 8 November 2017 (UTC)<br />
<br />
I think the section about systemd units should be rewritten to remove the old stale information and bring the required command-line to the fore, as mentioned on the github issue linked from the page and also repeated above by @Z3ntu. I've just been experimenting with ZFS and wasted a little time on this that could have been avoided if the page had been updated back in 2016. I haven't changed the page except to add the required command line there in case there is still relevance to the other text that I don't realise. I have just started using ZFS myself.<br />
[[User:starfry|starfry]] ([[User talk:starfry|talk]]) 16:07, 31 May 2018 (UTC)<br />
: I’ve set up ZFS recently and the ''systemctl enable'' commands from the Wiki page have worked fine for me so far. What do you mean by “old stale information,” and why is ''systemctl preset […]'' a “required command line?” —[[User:Auerhuhn|Auerhuhn]] ([[User talk:Auerhuhn|talk]]) 16:33, 31 May 2018 (UTC)<br />
<br />
That's why I never deleted anything from the page. I found that the ''systemctl enable'' commands worked up to the point that I rebooted. I discovered that the zpools were not imported on boot. Searching for information led me to the command-line on the github post and that did work for me. I thought I should raise its profile a little because I wasted a few hours on it. Actually I realised also I didn't enable the 3 services listed separately - just the ones at the top of the section (there are 6 services referenced by the github issue). But that probably is why I had the problem! Like I said, I have only just started with ZFS (I am testing in a VM with files rather than real devices) and it is possible that doing it in the small hours of the morning wasn't a good idea. The info on the page as it was left me asking more questions which were answered by the github issue and, in particular, that command line sequence. You don't need that command-line but you do need the systemd services that it enables (you could enable them by hand if you preferred). Maybe you don't need all six of them. But, as it was, it wasn't clear (to me).<br />
[[User:starfry|starfry]] ([[User talk:starfry|talk]]) 16:07, 31 May 2018 (UTC)<br />
<br />
== Scrub ==<br />
<br />
The advice to scrub at least once a week is completely unsubstantiated and probably incorrect in almost all situations. Advice should be accompanied by some argumentation and preferably links to support the claim.<br />
<br />
There is a good blog from Oracle about when and why (or not), to scrub:<br />
https://blogs.oracle.com/wonders-of-zfs-storage/disk-scrub-why-and-when-v2<br />
<br />
I wanted to edit the page to include the most important bits about scrubbing, but figured I'd throw it up for discussion first, what do people think about this?<br />
[[User:Mouseman|Mouseman]] ([[User talk:Mouseman|talk]]) 13:15, 21 October 2018 (UTC)</div>Mousemanhttps://wiki.archlinux.org/index.php?title=ArchWiki_talk:Sandbox&diff=549465ArchWiki talk:Sandbox2018-10-21T13:07:03Z<p>Mouseman: /* New topic to talk about */ forgot to sign</p>
<hr />
<div>==Comments==<br />
<br />
Hello, how are you? -- [[User:Acgtyrant|Acgtyrant]] ([[User talk:Acgtyrant|talk]]) 15:17, 27 August 2013 (UTC)<br />
:Fine, thanks, and you? -- [[User:Acgtyrant|Acgtyrant]] ([[User talk:Acgtyrant|talk]]) 15:17, 27 August 2013 (UTC)<br />
::Tres bien) -- [[User:Kycok|Kycok]] ([[User talk:Kycok|talk]]) 05:33, 28 January 2014 (UTC)<br />
:: how do you edit the wiki?<br />
Testing [[User:Tech2077|Tech2077]] ([[User talk:Tech2077|talk]]) 21:38, 3 July 2015 (UTC)<br />
:::: Uh, tu parles francais [[User:Kycok|Kycok]]? I am from switzerland but I hated french in school and now I am learning it every single day. --[[User:Ndalliard|ndalliard]] ([[User talk:Ndalliard|talk]]) 04:42, 31 July 2015 (UTC)<br />
Trying to contribute here. ([[User:Amoros|Amoros]]) ([[User talk:Amoros|talk]]) 15:37, 12 August 2015 (UTC)<br />
<br />
Hi I'm new here [[User:Chrisfryer78|Chrisfryer78]] ([[User talk:Chrisfryer78|talk]]) 08:36, 7 November 2015 (UTC)<br />
:Hello, I'm new here too! This is a test. [[User:Nullifer|Nullifer]] ([[User talk:Nullifer|talk]]) 07:48, 30 December 2016 (UTC)<br />
:Hello this is a test reply :) [[User:Viktorstrate|Viktorstrate]] ([[User talk:Viktorstrate|talk]])viktorstrate 16:10, 4 July 2018 (UTC)<br />
<br />
== A new section should be added ==<br />
<br />
I'd like to propose to change blahdieblah!<br />
<br />
[[User:E-type|E-type]] ([[User talk:E-type|talk]]) 17:38, 9 October 2016 (UTC)<br />
<br />
<br />
adding my contributions to the new section, test edit blah blah ...<br />
<br />
[[User:Fawix|Fawix]] ([[User talk:Fawix|talk]]) 20:14, 1 January 2017 (UTC)<br />
<br />
Testing stuff in the sandbox [[User:RobU3|RobU3]] ([[User talk:RobU3|talk]]) 05:15, 30 January 2018 (UTC)<br />
<br />
== New test section ==<br />
<br />
Hello Sandbox! <br />
<br />
-- [[User:Raczek|Raczek]] ([[User talk:Raczek|talk]]) 13:43, 11 May 2017 (UTC)<br />
<br />
Just a test...<br />
[[User:Leventel|Leventel]] ([[User talk:Leventel|talk]]) 10:07, 12 May 2017 (UTC)<br />
<br />
Looks like I'm the newest user now. [[User:Mycatfishsteve|Mycatfishsteve]] ([[User talk:Mycatfishsteve|talk]]) 21:56, 6 June 2017 (UTC)<br />
: Short joy ! I myself wonder how to add {{ic|codelines}} [[User:Lafleur|la Fleur]] ([[User talk:Lafleur|talk]]) 23:55, 12 October 2018 (UTC)<br />
<br />
== Add topic test ==<br />
<br />
test<br />
<br />
-- [[User:Z32O|Z32O]] ([[User talk:Z32O|talk]]) 17:14, 13 Oct 2017 (UTC)<br />
<br />
== New topic to talk about ==<br />
<br />
Bla bla bla bla bla.<br />
[[User:Mouseman|Mouseman]] ([[User talk:Mouseman|talk]]) 13:07, 21 October 2018 (UTC)</div>Mousemanhttps://wiki.archlinux.org/index.php?title=QEMU&diff=492109QEMU2017-10-02T05:44:33Z<p>Mouseman: /* Troubleshooting */ added an issue due to missing dependency with links to more info.</p>
<hr />
<div>[[Category:Emulators]]<br />
[[Category:Hypervisors]]<br />
[[de:Qemu]]<br />
[[es:QEMU]]<br />
[[fr:Qemu]]<br />
[[ja:QEMU]]<br />
[[ru:QEMU]]<br />
[[zh-hans:QEMU]]<br />
[[zh-hant:QEMU]]<br />
{{Related articles start}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|Libvirt}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related articles end}}<br />
<br />
According to the [http://wiki.qemu.org/Main_Page QEMU about page], "QEMU is a generic and open source machine emulator and virtualizer."<br />
<br />
When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your x86 PC). By using dynamic translation, it achieves very good performance.<br />
<br />
QEMU can use other hypervisors like [[Xen]] or [[KVM]] to use CPU extensions ([[Wikipedia:Hardware-assisted virtualization|HVM]]) for virtualization. When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|qemu}} package (or {{Pkg|qemu-headless}} for the version without GUI) and below optional packages for your needs:<br />
<br />
* {{Pkg|qemu-arch-extra}} - extra architectures support<br />
* {{Pkg|qemu-block-gluster}} - [[Glusterfs]] block support<br />
* {{Pkg|qemu-block-iscsi}} - [[iSCSI]] block support<br />
* {{Pkg|qemu-block-rbd}} - RBD block support <br />
* {{Pkg|samba}} - [[Samba|SMB/CIFS]] server support<br />
<br />
== Graphical front-ends for QEMU ==<br />
<br />
Unlike other virtualization programs such as [[VirtualBox]] and [[VMware]], QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s). However, there are several GUI front-ends for QEMU:<br />
<br />
* {{Pkg|virt-manager}}<br />
* {{Pkg|gnome-boxes}}<br />
* {{Pkg|qemu-launcher}}<br />
* {{Pkg|qtemu}}<br />
* {{AUR|aqemu}}<br />
<br />
Additional front-ends with QEMU support are available for [[libvirt]].<br />
<br />
== Creating new virtualized system ==<br />
<br />
=== Creating a hard disk image ===<br />
<br />
{{Tip|See the [https://en.wikibooks.org/wiki/QEMU/Images QEMU Wikibook] for more information on QEMU images.}}<br />
<br />
To run QEMU you will need a hard disk image, unless you are booting a live system from CD-ROM or the network (and not doing so to install an operating system to a hard disk image). A hard disk image is a file which stores the contents of the emulated hard disk.<br />
<br />
A hard disk image can be ''raw'', so that it is literally byte-by-byte the same as what the guest sees, and will always use the full capacity of the guest hard drive on the host. This method provides the least I/O overhead, but can waste a lot of space, as unused space on the guest cannot be used on the host.<br />
<br />
Alternatively, the hard disk image can be in a format such as ''qcow2'' which only allocates space to the image file when the guest operating system actually writes to those sectors on its virtual hard disk. The image appears as the full size to the guest operating system, even though it may take up only a very small amount of space on the host system. This image format also supports QEMU snapshotting functionality (see [[#Creating and managing snapshots via the monitor console]] for details). However, using this format instead of ''raw'' will likely affect performance.<br />
<br />
QEMU provides the {{ic|qemu-img}} command to create hard disk images. For example, to create a 4 GB image in the ''raw'' format:<br />
<br />
$ qemu-img create -f raw ''image_file'' 4G<br />
<br />
You may use {{ic|-f qcow2}} to create a ''qcow2'' disk instead.<br />
<br />
{{Note|You can also simply create a ''raw'' image by creating a file of the needed size using {{ic|dd}} or {{ic|fallocate}}.}}<br />
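<br />
For example, either of the following creates a 4 GB ''raw'' image (the file name is an example; the ''dd'' form creates a sparse file):<br />
<br />
 $ fallocate -l 4G ''image_file''<br />
 $ dd if=/dev/zero of=''image_file'' bs=1M count=0 seek=4096<br />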
<br />
{{Warning|If you store the hard disk images on a [[Btrfs]] file system, you should consider disabling [[Btrfs#Copy-on-Write (CoW)|Copy-on-Write]] for the directory before creating any images.}}<br />
<br />
==== Overlay storage images ====<br />
<br />
You can create a storage image once (the 'backing' image) and have QEMU keep mutations to this image in an overlay image. This allows you to revert to a previous state of this storage image. You could revert by creating a new overlay image at the time you wish to revert, based on the original backing image.<br />
<br />
To create an overlay image, issue a command like:<br />
<br />
$ qemu-img create -o backing_file=''img1.raw'',backing_fmt=''raw'' -f ''qcow2'' ''img1.cow''<br />
<br />
After that you can run your QEMU VM as usual (see [[#Running virtualized system]]):<br />
<br />
$ qemu-system-x86_64 ''img1.cow''<br />
<br />
The backing image will then be left intact and mutations to this storage will be recorded in the overlay image file.<br />
<br />
When the path to the backing image changes, repair is required.<br />
<br />
{{Warning|The backing image's absolute filesystem path is stored in the (binary) overlay image file. Changing the backing image's path requires some effort.}}<br />
<br />
Make sure that the original backing image's path still leads to this image. If necessary, make a symbolic link at the original path to the new path. Then issue a command like:<br />
<br />
$ qemu-img rebase -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
At your discretion, you may alternatively perform an 'unsafe' rebase where the old path to the backing image is not checked:<br />
<br />
$ qemu-img rebase -u -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
==== Resizing an image ====<br />
<br />
{{Warning|Resizing an image containing an NTFS boot file system could make the operating system installed on it unbootable. For full explanation and workaround see [http://tjworld.net/wiki/Howto/ResizeQemuDiskImages].}}<br />
<br />
The {{ic|qemu-img}} executable has the {{ic|resize}} option, which enables easy resizing of a hard drive image. It works for both ''raw'' and ''qcow2''. For example, to increase image space by 10 GB, run:<br />
<br />
$ qemu-img resize ''disk_image'' +10G<br />
<br />
After enlarging the disk image, you must use file system and partitioning tools inside the virtual machine to actually begin using the new space. When shrinking a disk image, you must '''first reduce the allocated file systems and partition sizes''' using the file system and partitioning tools inside the virtual machine and then shrink the disk image accordingly, otherwise shrinking the disk image will result in data loss!<br />
<br />
=== Preparing the installation media ===<br />
<br />
To install an operating system into your disk image, you need the installation medium (e.g. optical disk, USB-drive, or ISO image) for the operating system. The installation medium should not be mounted because QEMU accesses the media directly.<br />
<br />
{{Tip|If using an optical disk, it is a good idea to first dump the media to a file because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user without having to change access permissions on the media's device file). For example, if the CD-ROM device node is named {{ic|/dev/cdrom}}, you can dump it to a file with the command: {{bc|1=$ dd if=/dev/cdrom of=''cd_image.iso''}}}}<br />
<br />
=== Installing the operating system===<br />
<br />
This is the first time you will need to start the emulator. To install the operating system on the disk image, you must attach both the disk image and the installation media to the virtual machine, and have it boot from the installation media.<br />
<br />
For example on i386 guests, to install from a bootable ISO file as CD-ROM and a raw disk image:<br />
<br />
$ qemu-system-x86_64 -cdrom ''iso_image'' -boot order=d -drive file=''disk_image'',format=raw<br />
<br />
See {{man|1|qemu}} for more information about loading other media types (such as floppy, disk images or physical drives) and [[#Running virtualized system]] for other useful options.<br />
<br />
After the operating system has finished installing, the QEMU image can be booted directly (see [[#Running virtualized system]]).<br />
<br />
{{Warning|By default only 128 MB of memory is assigned to the machine. The amount of memory can be adjusted with the {{ic|-m}} switch, for example {{ic|-m 512M}} or {{ic|-m 2G}}.}}<br />
<br />
{{Tip|<br />
* Instead of specifying {{ic|1=-boot order=x}}, some users may feel more comfortable using a boot menu: {{ic|1=-boot menu=on}}, at least during configuration and experimentation.<br />
* If you need to replace floppies or CDs as part of the installation process, you can use the QEMU machine monitor (press {{ic|Ctrl+Alt+2}} in the virtual machine's window) to remove and attach storage devices to a virtual machine. Type {{ic|info block}} to see the block devices, and use the {{ic|change}} command to swap out a device. Press {{ic|Ctrl+Alt+1}} to go back to the virtual machine.}}<br />
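<br />
For example, swapping the CD-ROM from the monitor might look like this (the device name {{ic|ide1-cd0}} is a common default but may differ; check the {{ic|info block}} output):<br />
<br />
 (qemu) info block<br />
 (qemu) change ide1-cd0 ''/path/to/another.iso''<br />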
<br />
== Running virtualized system ==<br />
<br />
{{ic|qemu-system-*}} binaries (for example {{ic|qemu-system-i386}} or {{ic|qemu-system-x86_64}}, depending on guest's architecture) are used to run the virtualized guest. The usage is:<br />
<br />
$ qemu-system-x86_64 ''options'' ''disk_image''<br />
<br />
Options are the same for all {{ic|qemu-system-*}} binaries, see {{man|1|qemu}} for documentation of all options.<br />
<br />
By default, QEMU will show the virtual machine's video output in a window. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press {{ic|Ctrl+Alt+g}}.<br />
<br />
{{Warning|QEMU should never be run as root. If you must launch it in a script as root, you should use the {{ic|-runas}} option to make QEMU drop root privileges.}}<br />
<br />
=== Enabling KVM ===<br />
<br />
KVM must be supported by your processor and kernel, and necessary [[kernel modules]] must be loaded. See [[KVM]] for more information.<br />
<br />
To start QEMU in KVM mode, append {{ic|-enable-kvm}} to the additional start options. To check if KVM is enabled for a running VM, enter the QEMU [https://en.wikibooks.org/wiki/QEMU/Monitor Monitor] using {{ic|Ctrl+Alt+Shift+2}}, and type {{ic|info kvm}}.<br />
<br />
{{Note|<br />
* If you start your VM with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.<br />
* KVM needs to be enabled in order to start Windows 7 and Windows 8 properly without a ''blue screen''.<br />
}}<br />
<br />
=== Enabling IOMMU (Intel VT-d/AMD-Vi) support ===<br />
Using an IOMMU opens up features such as PCI passthrough and memory protection against faulty or malicious devices; see [[wikipedia:Input-output memory management unit#Advantages]] and [https://www.quora.com/Memory-Management-computer-programming/Could-you-explain-IOMMU-in-plain-English Memory Management (computer programming): Could you explain IOMMU in plain English?].<br />
<br />
To enable IOMMU:<br />
#Ensure that AMD-Vi/Intel VT-d is supported by the CPU and is enabled in the BIOS settings.<br />
#Set the correct [[kernel parameter]] based on the CPU-vendor:<br />
#*Intel - {{ic|1=intel_iommu=on}} or {{ic|1=intel_iommu=pt}}<br />
#*AMD - {{ic|1=amd_iommu=on}}<br />
#Reboot and ensure IOMMU is enabled by checking {{ic|dmesg}} for {{ic|DMAR}}: {{ic|[0.000000] DMAR: IOMMU enabled}}<br />
#Add {{ic|-device intel-iommu}} to create the IOMMU device:<br />
<br />
$ qemu-system-x86_64 '''-enable-kvm -machine q35,accel=kvm -device intel-iommu''' -cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time ..<br />
<br />
{{Note|<br />
On Intel CPU based systems, creating an IOMMU device in a QEMU guest with {{ic|-device intel-iommu}} will disable PCI passthrough with an error like: {{bc|Device at bus pcie.0 addr 09.0 requires iommu notifier which is currently not supported by intel-iommu emulation}} While adding the kernel parameter {{ic|1=intel_iommu=on}} is still needed for remapping IO (e.g. [[PCI_passthrough_via_OVMF#Using_vfio-pci|PCI passthrough with vfio-pci]]), {{ic|-device intel-iommu}} should not be set if PCI passthrough is required.<br />
}}<br />
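Step 3 of the list above can be sketched as a simple grep over the kernel log. This is only a sketch: a sample log line stands in for real {{ic|dmesg}} output, since the exact wording may vary between kernel versions.<br />
<br />

```shell
# Simulated check for the DMAR line from step 3; on a real system, run:
#   dmesg | grep -e DMAR -e IOMMU
dmesg_line='[    0.000000] DMAR: IOMMU enabled'
printf '%s\n' "$dmesg_line" | grep -q 'DMAR: IOMMU enabled' && echo 'IOMMU enabled'
```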
<br />
== Moving data between host and guest OS ==<br />
<br />
=== Network ===<br />
<br />
Data can be shared between the host and guest OS using any network protocol that can transfer files, such as [[NFS]], [[SMB]], [[Wikipedia:Network Block Device|NBD]], HTTP, [[Very Secure FTP Daemon|FTP]], or [[SSH]], provided that you have set up the network appropriately and enabled the appropriate services.<br />
<br />
The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as an SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via [[SMB]] or [[NFS]], or you can access the host's HTTP server, etc.<br />
It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see [[#Tap networking with QEMU]]).<br />
<br />
=== QEMU's built-in SMB server ===<br />
<br />
QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up [[Samba]] with an automatically generated {{ic|smb.conf}} file located at {{ic|/tmp/qemu-smb.''pid''-0/smb.conf}} and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and this is not necessarily very useful since the guest can also access the normal [[Samba]] service on the host if you have set up shares on it.<br />
<br />
To enable this feature, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -net nic -net user,smb=''shared_dir_path''<br />
<br />
where {{ic|''shared_dir_path''}} is a directory that you want to share between the guest and host.<br />
<br />
Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to {{ic|\\10.0.2.4\qemu}}.<br />
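For a Linux guest, the same share can be reached over CIFS. The following is a hedged sketch: it assumes {{ic|cifs-utils}} is installed in the guest and that the share accepts guest access, and the mount point is arbitrary.<br />
<br />

```shell
# Build the UNC path for QEMU's built-in SMB server (10.0.2.4, share "qemu").
host=10.0.2.4
share=qemu
unc="//${host}/${share}"
echo "$unc"
# In the guest (assumes cifs-utils and anonymous guest access):
# mount -t cifs "$unc" /mnt -o guest
```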
<br />
{{Note|<br />
* If you pass the share option multiple times, e.g. {{ic|1=-net user,smb=''shared_dir_path1'' -net user,smb=''shared_dir_path2''}} or {{ic|1=-net user,smb=''shared_dir_path1'',smb=''shared_dir_path2''}}, only the last defined one will be shared.<br />
* If you cannot access the shared folder and the guest system is Windows, check that the [http://ecross.mvps.org/howto/enable-netbios-over-tcp-ip-with-windows.htm NetBIOS protocol is enabled] and that a firewall does not block [http://technet.microsoft.com/en-us/library/cc940063.aspx ports] used by the NetBIOS protocol.<br />
}}<br />
<br />
=== Mounting a partition inside a raw disk image ===<br />
<br />
When the virtual machine is not running, it is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices. This does not work with disk images in special formats, such as qcow2, although those can be mounted using {{ic|qemu-nbd}}.<br />
<br />
{{Warning|You must make sure to unmount the partitions before running the virtual machine again. Otherwise, data corruption is very likely to occur.}}<br />
<br />
==== With manually specifying byte offset ====<br />
<br />
One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:<br />
<br />
# mount -o loop,offset=32256 ''disk_image'' ''mountpoint''<br />
<br />
The {{ic|1=offset=32256}} option is actually passed to the {{ic|losetup}} program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the {{ic|sizelimit}} option to specify the exact size of the partition, but this is usually unnecessary.<br />
<br />
Depending on your disk image, the needed partition may not start at offset 32256. Run {{ic|fdisk -l ''disk_image''}} to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to {{ic|mount}}.<br />
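As a worked example (the start sector 2048 is hypothetical), the byte offset can be computed in the shell before mounting:<br />
<br />

```shell
# fdisk -l disk_image reports start/end in 512-byte sectors; multiply by 512
# to get the byte offset that mount's offset= option expects.
start_sector=2048            # hypothetical value read from the fdisk output
offset=$((start_sector * 512))
echo "$offset"
# mount -o loop,offset=$offset disk_image mountpoint
```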
<br />
==== With loop module autodetecting partitions ====<br />
<br />
The Linux loop driver actually supports partitions in loopback devices, but it is disabled by default. To enable it, do the following:<br />
<br />
* Get rid of all your loopback devices (unmount all mounted images, etc.).<br />
* [[Kernel_modules#Manual_module_handling|Unload]] the {{ic|loop}} kernel module, and load it with the {{ic|1=max_part=15}} parameter set. Additionally, the maximum number of loop devices can be controlled with the {{ic|max_loop}} parameter.<br />
<br />
{{Tip|You can put an entry in {{ic|/etc/modprobe.d}} to load the loop module with {{ic|1=max_part=15}} every time, or you can put {{ic|1=loop.max_part=15}} on the kernel command-line, depending on whether you have the {{ic|loop.ko}} module built into your kernel or not.}}<br />
<br />
Set up your image as a loopback device:<br />
<br />
# losetup -f -P ''disk_image''<br />
<br />
Then, if the device created was {{ic|/dev/loop0}}, additional devices {{ic|/dev/loop0pX}} will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:<br />
<br />
# mount /dev/loop0p1 ''mountpoint''<br />
<br />
To mount the disk image with ''udisksctl'', see [[Udisks#Mount loop devices]].<br />
<br />
==== With kpartx ====<br />
<br />
'''kpartx''' from the {{AUR|multipath-tools}} package can read a partition table on a device and create a new device for each partition. For example:<br />
# kpartx -a ''disk_image''<br />
<br />
This will set up the loopback device and create the necessary partition devices in {{ic|/dev/mapper/}}.<br />
<br />
=== Mounting a partition inside a qcow2 image ===<br />
<br />
You may mount a partition inside a qcow2 image using {{ic|qemu-nbd}}. See [http://en.wikibooks.org/wiki/QEMU/Images#Mounting_an_image_on_the_host Wikibooks].<br />
<br />
=== Using any real partition as the single primary partition of a hard disk image ===<br />
<br />
Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the file system layer on the physical host. Such a partition also provides a way to share data between the host and guest.<br />
<br />
In Arch Linux, device files for raw partitions are, by default, owned by ''root'' and the ''disk'' group. If you would like to have a non-root user be able to read and write to a raw partition, you need to change the owner of the partition's device file to that user.<br />
<br />
{{Warning|<br />
* Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.<br />
* You must not mount a file system on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.<br />
}}<br />
<br />
After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.<br />
<br />
However, things are a little more complicated if you want to have the ''entire'' virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine, since you cannot install a bootloader to a partition that is itself formatted as a file system and not as a partitioned device with an MBR. Such a virtual machine can be booted either by specifying the [[kernel]] and [[initramfs|initrd]] manually, or by simulating a disk with an MBR using linear [[RAID]].<br />
<br />
==== By specifying kernel and initrd manually ====<br />
<br />
QEMU supports loading [[Kernels|Linux kernels]] and [[initramfs|init ramdisks]] directly, thereby circumventing bootloaders such as [[GRUB]]. It then can be launched with the physical partition containing the root file system as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:<br />
<br />
{{Note|In this example, it is the '''host's''' images that are being used, not the guest's. If you wish to use the guest's images, either mount {{ic|/dev/sda3}} read-only (to protect the file system from the host) and specify the {{ic|/full/path/to/images}} or use some kexec hackery in the guest to reload the guest's kernel (extends boot time). }}<br />
<br />
$ qemu-system-x86_64 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3<br />
<br />
In the above example, the physical partition being used for the guest's root file system is {{ic|/dev/sda3}} on the host, but it shows up as {{ic|/dev/sda}} on the guest.<br />
<br />
You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.<br />
<br />
When there are multiple [[kernel parameters]] to be passed to the {{ic|-append}} option, they need to be quoted using single or double quotes. For example:<br />
<br />
... -append 'root=/dev/sda1 console=ttyS0'<br />
<br />
==== Simulate virtual disk with MBR using linear RAID ====<br />
<br />
A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a file system and not just having the guest partition the partition as if it were a disk, is to simulate an MBR for it so that it can boot using a bootloader such as GRUB.<br />
<br />
You can do this using software [[RAID]] in linear mode (you need the {{ic|linear.ko}} kernel driver) and a loopback device: the trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image.<br />
<br />
Suppose you have a plain, unmounted {{ic|/dev/hdaN}} partition with some file system on it you wish to make part of a QEMU disk image. First, you create some small file to hold the MBR:<br />
<br />
$ dd if=/dev/zero of=''/path/to/mbr'' count=32<br />
<br />
Here, a 16 KB (32 * 512 bytes) file is created. It is important not to make it too small (even though the MBR only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, you set up a loopback device to the MBR file:<br />
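The size arithmetic can be checked quickly in the shell. This sketch uses a temporary file so that nothing about {{ic|''/path/to/mbr''}} is assumed:<br />
<br />

```shell
# 32 sectors of 512 bytes yield a 16384-byte (16 KB) MBR file; this also matches
# the 2 heads x 16 sectors x 512 bytes per cylinder chosen later in fdisk's expert menu.
mbr=$(mktemp)
dd if=/dev/zero of="$mbr" bs=512 count=32 2>/dev/null
stat -c %s "$mbr"
rm -f "$mbr"
```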
<br />
# losetup -f ''/path/to/mbr''<br />
<br />
Let us assume the resulting device is {{ic|/dev/loop0}}, supposing that no other loopback devices were already in use. The next step is to create the "merged" MBR + {{ic|/dev/hdaN}} disk image using software RAID:<br />
<br />
# modprobe linear<br />
# mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hda''N''<br />
<br />
The resulting {{ic|/dev/md0}} is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partitions table) so that the primary partition start point in the MBR matches the one of {{ic|/dev/hda''N''}} inside {{ic|/dev/md0}} (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using {{ic|fdisk}} on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in non-kilobyte-roundable offsets (such as 31.5 KB, as in the previous section) that cannot be managed by the software RAID code. Hence, from the host:<br />
<br />
# fdisk /dev/md0<br />
<br />
Press {{ic|X}} to enter the expert menu. Set the number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2x16x512=16k.<br />
<br />
Now, press {{ic|R}} to return to the main menu.<br />
<br />
Press {{ic|P}} and check that the cylinder size is now 16k.<br />
<br />
Now, create a single primary partition corresponding to {{ic|/dev/hda''N''}}. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).<br />
<br />
Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:<br />
<br />
$ qemu-system-x86_64 -hdc /dev/md0 ''[...]''<br />
<br />
You can, of course, safely set any bootloader on this disk image using QEMU, provided the original {{ic|/dev/hda''N''}} partition contains the necessary tools.<br />
<br />
===== Alternative: use nbd-server =====<br />
Instead of linear RAID, you may use {{ic|nbd-server}} (from the {{pkg|nbd}} package) to create an MBR wrapper for QEMU.<br />
<br />
Assuming you have already set up your MBR wrapper file like above, rename it to {{ic|wrapper.img.0}}. Then create a symbolic link named {{ic|wrapper.img.1}} in the same directory, pointing to your partition. Then put the following script in the same directory:<br />
<br />
#!/bin/sh<br />
dir="$(realpath "$(dirname "$0")")"<br />
cat >wrapper.conf <<EOF<br />
[generic]<br />
allowlist = true<br />
listenaddr = 127.713705<br />
port = 10809<br />
<br />
[wrap]<br />
exportname = $dir/wrapper.img<br />
multifile = true<br />
EOF<br />
<br />
nbd-server \<br />
-C wrapper.conf \<br />
-p wrapper.pid \<br />
"$@"<br />
<br />
The {{ic|.0}} and {{ic|.1}} suffixes are essential; the rest can be changed. After running the above script (which you may need to do as root to make sure nbd-server is able to access the partition), you can launch QEMU with:<br />
<br />
qemu-system-x86_64 -drive file=nbd:127.713705:10809:exportname=wrap ''[...]''<br />
<br />
== Networking ==<br />
<br />
{{Poor writing|Network topologies (sections [[#Host-only networking]], [[#Internal networking]] and info spread out across other sections) should not be described alongside the various virtual interfaces implementations, such as [[#User-mode networking]], [[#Tap networking with QEMU]], [[#Networking with VDE2]].}}<br />
<br />
The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde because tap devices and bridges are implemented in-kernel.<br />
<br />
In addition, networking performance can be improved by assigning virtual machines a [http://wiki.libvirt.org/page/Virtio virtio] network device rather than the default emulation of an e1000 NIC. See [[#Installing virtio drivers]] for more information.<br />
<br />
=== Link-level address caveat ===<br />
<br />
If you give QEMU the {{ic|-net nic}} argument, it will, by default, assign the virtual machine a network interface with the link-level address {{ic|52:54:00:12:34:56}}. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses, because the source link-level address is not rewritten as packets pass through the tap device.<br />
<br />
Make sure that each virtual machine has a unique link-level address, but it should always start with {{ic|52:54:}}. Use the following option, replacing each ''X'' with an arbitrary hexadecimal digit:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:''XX:XX:XX:XX'' -net vde ''disk_image''<br />
<br />
Generating unique link-level addresses can be done in several ways:<br />
<br />
<ol><br />
<li>Manually specify a unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but this is unusable for a large number of virtual machines.<br />
</li><br />
<li>Generate a random link-level address each time the virtual machine is run. There is practically zero probability of collisions, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address in a {{ic|macaddr}} variable:<br />
<br />
{{bc|1=<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
qemu-system-x86_64 -net nic,macaddr="$macaddr" -net vde ''disk_image''<br />
}}<br />
<br />
</li><br />
<li>Use the following script {{ic|qemu-mac-hasher.py}} to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.<br />
<br />
{{hc|qemu-mac-hasher.py|<nowiki><br />
#!/usr/bin/env python<br />
<br />
import sys<br />
import zlib<br />
<br />
if len(sys.argv) != 2:<br />
print("usage: %s <VM Name>" % sys.argv[0])<br />
sys.exit(1)<br />
<br />
crc = zlib.crc32(sys.argv[1].encode("utf-8")) & 0xffffffff<br />
crc = "%08x" % crc  # zero-pad to eight hex digits<br />
print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))<br />
</nowiki>}}<br />
<br />
In a script, you can use for example:<br />
<br />
vm_name="''VM Name''"<br />
qemu-system-x86_64 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde ''disk_image''<br />
</li><br />
</ol><br />
<br />
=== User-mode networking ===<br />
<br />
By default, without any {{ic|-netdev}} arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.<br />
<br />
{{warning|This only works with the TCP and UDP protocols, so ICMP, including {{ic|ping}}, will not work. Do not use {{ic|ping}} to test network connectivity.}}<br />
<br />
This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.<br />
<br />
QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest) or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the {{ic|-net user}} flag for more details.<br />
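For example, port redirection makes it possible to SSH into the guest through a forwarded host port. This is a hedged sketch: the port numbers are arbitrary, and the option string is assembled in a variable only to make it easy to inspect.<br />
<br />

```shell
# Forward host TCP port 5555 to guest port 22 (SSH) with user-mode networking.
hostport=5555
guestport=22
fwd="hostfwd=tcp::${hostport}-:${guestport}"
echo "$fwd"
# qemu-system-x86_64 disk_image -net nic -net user,"$fwd"
# then, from the host: ssh -p 5555 user@localhost
```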
<br />
However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.<br />
<br />
{{Note|If the host system uses [[systemd-networkd]], make sure to symlink the {{ic|/etc/resolv.conf}} file as described in [[systemd-networkd#Required services and setup]], otherwise the DNS lookup in the guest system will not work.}}<br />
<br />
=== Tap networking with QEMU ===<br />
<br />
[[wikipedia:TUN/TAP|Tap devices]] are a Linux kernel feature that allows you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.<br />
<br />
QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.<br />
<br />
Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as {{ic|eth0}}. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.<br />
<br />
{{Warning|If you bridge together a tap device and some host interface, such as {{ic|eth0}}, your virtual machines will appear directly on the external network, which will expose them to possible attack. Depending on what resources your virtual machines have access to, you may need to take all the [[Firewalls|precautions]] you would normally take in securing a computer to secure your virtual machines. If the risk is too great, if the virtual machines have few resources, or if you set up multiple virtual machines, a better solution might be to use [[#Host-only networking|host-only networking]] and set up NAT. In this case you only need one firewall on the host instead of multiple firewalls for each guest.}}<br />
<br />
As indicated in the user-mode networking section, tap devices offer higher networking performance than user-mode networking. If the guest OS supports the virtio network driver, networking performance will increase considerably as well. Supposing the use of the tap0 device, that the virtio driver is used on the guest, and that no scripts are used to help start/stop networking, the relevant part of the QEMU command would be:<br />
<br />
-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no<br />
<br />
If you are already using a tap device with the virtio networking driver, you can boost networking performance even further by enabling vhost, like:<br />
<br />
-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no,vhost=on<br />
<br />
See http://www.linux-kvm.com/content/how-maximize-virtio-net-performance-vhost-net for more information.<br />
<br />
==== Host-only networking ====<br />
<br />
If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. {{ic|eth0}}) is connected to the bridge, then the virtual machines will be able to talk to each other and the host system. However, they will not be able to talk to anything on the external network, provided that you do not set up IP masquerading on the physical host. This configuration is called ''host-only networking'' by other virtualization software such as [[VirtualBox]].<br />
<br />
{{Tip|<br />
* If you want to set up IP masquerading, e.g. NAT for virtual machines, see the [[Internet sharing#Enable NAT]] page.<br />
* You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the {{ic|172.20.0.1/16}} subnet with [[dnsmasq]] as the DHCP server:<br />
# ip addr add 172.20.0.1/16 dev br0<br />
# ip link set br0 up<br />
# dnsmasq --interface&#61;br0 --bind-interfaces --dhcp-range&#61;172.20.0.2,172.20.255.254<br />
}}<br />
<br />
==== Internal networking ====<br />
<br />
If you do not give the bridge an IP address and add an [[iptables]] rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called ''internal networking'' by other virtualization software such as [[VirtualBox]]. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.<br />
<br />
By default, iptables drops packets in a bridged network. You may need to use the following iptables rule to allow packets to be forwarded across the bridge:<br />
<br />
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Bridged networking using qemu-bridge-helper ====<br />
<br />
{{Note|This method is available since QEMU 1.1, see http://wiki.qemu.org/Features/HelperNetworking.}}<br />
<br />
This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses the {{ic|/usr/lib/qemu/qemu-bridge-helper}} binary, which allows creating tap devices on an existing bridge.<br />
<br />
{{Tip|See [[Network bridge]] for information on creating a bridge.}}<br />
<br />
First, create a configuration file containing the names of all bridges to be used by QEMU:<br />
<br />
{{hc|/etc/qemu/bridge.conf|<br />
allow ''bridge0''<br />
allow ''bridge1''<br />
...}}<br />
<br />
Now start the VM. The most basic usage would be:<br />
<br />
$ qemu-system-x86_64 -net nic -net bridge,br=''bridge0'' ''[...]''<br />
<br />
With multiple taps, the most basic usage requires specifying the VLAN for all additional NICs:<br />
<br />
$ qemu-system-x86_64 -net nic -net bridge,br=''bridge0'' -net nic,vlan=1 -net bridge,vlan=1,br=''bridge1'' ''[...]''<br />
<br />
==== Creating bridge manually ====<br />
<br />
{{Poor writing|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
{{Tip|Since QEMU 1.1, the [http://wiki.qemu.org/Features/HelperNetworking network bridge helper] can set tun/tap up for you without the need for additional scripting. See [[#Bridged networking using qemu-bridge-helper]].}}<br />
<br />
The following describes how to bridge a virtual machine to a host interface such as {{ic|eth0}}, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.<br />
<br />
We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.<br />
<br />
* Install {{Pkg|bridge-utils}}, which provides {{ic|brctl}} to manipulate bridges.<br />
<br />
* Enable IPv4 forwarding:<br />
# sysctl net.ipv4.ip_forward=1<br />
<br />
To make the change permanent, change {{ic|1=net.ipv4.ip_forward = 0}} to {{ic|1=net.ipv4.ip_forward = 1}} in {{ic|/etc/sysctl.d/99-sysctl.conf}}.<br />
<br />
* Load the {{ic|tun}} module and configure it to be loaded on boot. See [[Kernel modules]] for details.<br />
<br />
* Now create the bridge. See [[Bridge with netctl]] for details. Remember to name your bridge as {{ic|br0}}, or change the scripts below to your bridge's name.<br />
<br />
* Create the script that QEMU uses to bring up the tap adapter with {{ic|root:kvm}} 750 permissions:<br />
{{hc|/etc/qemu-ifup|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifup"<br />
echo "Bringing up $1 for bridged mode..."<br />
sudo /usr/bin/ip link set $1 up promisc on<br />
echo "Adding $1 to br0..."<br />
sudo /usr/bin/brctl addif br0 $1<br />
sleep 2<br />
</nowiki>}}<br />
<br />
* Create the script that QEMU uses to bring down the tap adapter in {{ic|/etc/qemu-ifdown}} with {{ic|root:kvm}} 750 permissions:<br />
{{hc|/etc/qemu-ifdown|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifdown"<br />
sudo /usr/bin/ip link set $1 down<br />
sudo /usr/bin/brctl delif br0 $1<br />
sudo /usr/bin/ip link delete dev $1<br />
</nowiki>}}<br />
<br />
* Use {{ic|visudo}} to add the following to your {{ic|sudoers}} file:<br />
{{bc|<nowiki><br />
Cmnd_Alias QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl<br />
%kvm ALL=NOPASSWD: QEMU<br />
</nowiki>}}<br />
<br />
* Launch QEMU using the following {{ic|run-qemu}} script:<br />
{{hc|run-qemu|<nowiki><br />
#!/bin/bash<br />
USERID=$(whoami)<br />
<br />
# Get name of newly created TAP device; see https://bbs.archlinux.org/viewtopic.php?pid=1285079#p1285079<br />
precreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)<br />
sudo /usr/bin/ip tuntap add user $USERID mode tap<br />
postcreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)<br />
IFACE=$(comm -13 <(echo "$precreation") <(echo "$postcreation"))<br />
<br />
# This line creates a random MAC address. The downside is the DHCP server will assign a different IP address each time<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
# Instead, uncomment and edit this line to set a static MAC address. The benefit is that the DHCP server will assign the same IP address.<br />
# macaddr='52:54:be:36:42:a9'<br />
<br />
qemu-system-x86_64 -net nic,macaddr=$macaddr -net tap,ifname="$IFACE" $*<br />
<br />
sudo ip link set dev $IFACE down &> /dev/null<br />
sudo ip tuntap del $IFACE mode tap &> /dev/null<br />
</nowiki>}}<br />
<br />
Then, to launch a VM, do something like this:<br />
$ run-qemu -hda ''myvm.img'' -m 512 -vga std<br />
<br />
* It is recommended for performance and security reasons to disable the [http://ebtables.netfilter.org/documentation/bridge-nf.html firewall on the bridge]:<br />
{{hc|/etc/sysctl.d/10-disable-firewall-on-bridge.conf|<nowiki><br />
net.bridge.bridge-nf-call-ip6tables = 0<br />
net.bridge.bridge-nf-call-iptables = 0<br />
net.bridge.bridge-nf-call-arptables = 0<br />
</nowiki>}}<br />
Run {{ic|sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf}} to apply the changes immediately.<br />
<br />
See the [http://wiki.libvirt.org/page/Networking#Creating_network_initscripts libvirt wiki] and [https://bugzilla.redhat.com/show_bug.cgi?id=512206 Fedora bug 512206]. If sysctl reports errors about non-existent files during boot, make the {{ic|bridge}} module load at boot. See [[Kernel modules#Automatic module handling]].<br />
<br />
Alternatively, you can configure [[iptables]] to allow all traffic to be forwarded across the bridge by adding a rule like this:<br />
-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Network sharing between physical device and a Tap device through iptables ====<br />
<br />
{{Merge|Internet_sharing|Duplication, not specific to QEMU.}}<br />
<br />
Bridged networking works fine over a wired interface (e.g. eth0) and is easy to set up. However, if the host is connected to the network through a wireless device, bridging is not possible.<br />
<br />
See [[Network bridge#Wireless interface on a bridge]] as a reference.<br />
<br />
One way to work around this is to set up a tap device with a static IP, let Linux automatically handle the routing for it, and then forward traffic between the tap interface and the device connected to the network through iptables rules.<br />
<br />
See [[Internet sharing]] as a reference.<br />
<br />
There you can find what is needed to share the network between devices, including tap and tun ones. The following only hints further at some of the required host configuration. As indicated in the reference above, the client needs to be configured with a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the client might need to be manually edited if they change when switching from one host device connected to the network to another.<br />
<br />
To allow IP forwarding on every boot, add the following lines to a sysctl configuration file inside {{ic|/etc/sysctl.d}}:<br />
<br />
net.ipv4.ip_forward = 1<br />
net.ipv6.conf.default.forwarding = 1<br />
net.ipv6.conf.all.forwarding = 1<br />
<br />
The iptables rules can look like:<br />
<br />
# Forwarding from/to outside<br />
iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT<br />
# NAT/Masquerade (network address translation)<br />
iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE<br />
<br />
The above assumes there are 3 devices connected to the network sharing traffic with one internal device, where, for example:<br />
<br />
INT=tap0<br />
EXT_0=eth0<br />
EXT_1=wlan0<br />
EXT_2=tun0<br />
<br />
This forwarding allows sharing wired and wireless connections with the tap device.<br />
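Since the rules above only vary in the interface name, they can also be generated with a small POSIX shell loop. This is a sketch, not part of the original setup; it only prints the commands, so the output can be reviewed and then piped to {{ic|sh}} as root to apply:<br />

```shell
#!/bin/sh
# Emit the forwarding/NAT rules for one internal and several external
# interfaces, instead of writing each rule out by hand. Interface names
# are the examples from the section above.
INT=tap0

gen_rules() {
    for ext in "$@"; do
        echo "iptables -A FORWARD -i $INT -o $ext -j ACCEPT"
        echo "iptables -A FORWARD -i $ext -o $INT -j ACCEPT"
        echo "iptables -t nat -A POSTROUTING -o $ext -j MASQUERADE"
    done
}

gen_rules eth0 wlan0 tun0
```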
<br />
The forwarding rules shown are stateless and do pure forwarding only. One could restrict specific traffic, or put a firewall in place to protect the guest and others; however, that would reduce networking performance, whereas a simple bridge does not include any of it.<br />
<br />
Bonus: whether the connection is wired or wireless, if the host connects to a remote site over a VPN with a tun device (say the tun device opened for that connection is tun0) and the prior iptables rules are applied, then the remote connection is also shared with the guest. This avoids the need for the guest to open its own VPN connection. Again, since the guest networking needs to be static, if connecting the host remotely this way you will most probably need to edit the DNS servers on the guest.<br />
<br />
=== Networking with VDE2 ===<br />
<br />
{{Poor writing|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
==== What is VDE? ====<br />
<br />
VDE stands for Virtual Distributed Ethernet. It started as an enhancement of [[User-mode Linux|uml]]_switch. It is a toolbox to manage virtual networks.<br />
<br />
The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration shown here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read [http://wiki.virtualsquare.org/wiki/index.php/Main_Page the documentation of the project].<br />
<br />
The advantage of this method is that you do not have to grant sudo privileges to your users. Regular users should not be allowed to run modprobe.<br />
<br />
==== Basics ====<br />
<br />
VDE support can be [[pacman|installed]] via the {{Pkg|vde2}} package.<br />
<br />
In this configuration, we use tun/tap to create a virtual interface on the host. Load the {{ic|tun}} module (see [[Kernel modules]] for details):<br />
<br />
# modprobe tun<br />
<br />
Now create the virtual switch:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
<br />
This line creates the switch, creates {{ic|tap0}}, "plugs" it, and allows the users of the group {{ic|users}} to use it.<br />
<br />
The interface is plugged in but not configured yet. To configure it, run this command:<br />
<br />
# ip addr add 192.168.100.254/24 dev tap0<br />
<br />
Now, you just have to run KVM with these {{ic|-net}} options as a normal user:<br />
<br />
$ qemu-system-x86_64 -net nic -net vde -hda ''[...]''<br />
<br />
Configure networking for your guest as you would do in a physical network.<br />
<br />
{{Tip|You might want to set up NAT on tap device to access the internet from the virtual machine. See [[Internet sharing#Enable NAT]] for more information.}}<br />
<br />
==== Startup scripts ====<br />
<br />
Example of main script starting VDE:<br />
<br />
{{hc|/etc/systemd/scripts/qemu-network-env|<nowiki><br />
#!/bin/sh<br />
# QEMU/VDE network environment preparation script<br />
<br />
# The IP configuration for the tap device that will be used for<br />
# the virtual machine network:<br />
<br />
TAP_DEV=tap0<br />
TAP_IP=192.168.100.254<br />
TAP_MASK=24<br />
TAP_NETWORK=192.168.100.0<br />
<br />
# Host interface<br />
NIC=eth0<br />
<br />
case "$1" in<br />
start)<br />
echo -n "Starting VDE network for QEMU: "<br />
<br />
# If you want tun kernel module to be loaded by script uncomment here<br />
#modprobe tun 2>/dev/null<br />
## Wait for the module to be loaded<br />
#while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done<br />
<br />
# Start tap switch<br />
vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users<br />
<br />
# Bring tap interface up<br />
ip address add "$TAP_IP"/"$TAP_MASK" dev "$TAP_DEV"<br />
ip link set "$TAP_DEV" up<br />
<br />
# Start IP Forwarding<br />
echo "1" > /proc/sys/net/ipv4/ip_forward<br />
iptables -t nat -A POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
;;<br />
stop)<br />
echo -n "Stopping VDE network for QEMU: "<br />
# Delete the NAT rules<br />
        iptables -t nat -D POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
<br />
# Bring tap interface down<br />
ip link set "$TAP_DEV" down<br />
<br />
# Kill VDE switch<br />
pgrep -f vde_switch | xargs kill -TERM<br />
;;<br />
restart|reload)<br />
$0 stop<br />
sleep 1<br />
$0 start<br />
;;<br />
*)<br />
echo "Usage: $0 {start|stop|restart|reload}"<br />
exit 1<br />
esac<br />
exit 0<br />
</nowiki>}}<br />
<br />
Example of systemd service using the above script:<br />
<br />
{{hc|/etc/systemd/system/qemu-network-env.service|<nowiki><br />
[Unit]<br />
Description=Manage VDE Switch<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/etc/systemd/scripts/qemu-network-env start<br />
ExecStop=/etc/systemd/scripts/qemu-network-env stop<br />
RemainAfterExit=yes<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
Make {{ic|qemu-network-env}} executable:<br />
<br />
# chmod u+x /etc/systemd/scripts/qemu-network-env<br />
<br />
You can [[start]] {{ic|qemu-network-env.service}} as usual.<br />
<br />
====Alternative method====<br />
<br />
If the above method does not work, or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables, you can do the following for the same result.<br />
<br />
# vde_switch -daemon -mod 660 -group users<br />
# slirpvde --dhcp --daemon<br />
<br />
Then, to start the VM with a connection to the network of the host:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:00:00:EE:03 -net vde ''disk_image''<br />
<br />
=== VDE2 Bridge ===<br />
<br />
Based on the diagram in [http://selamatpagicikgu.wordpress.com/2011/06/08/quickhowto-qemu-networking-using-vde-tuntap-and-bridge/ quickhowto: qemu networking using vde, tun/tap, and bridge]. Any virtual machine connected to vde is externally exposed. For example, each virtual machine can receive DHCP configuration directly from your ADSL router.<br />
<br />
==== Basics ====<br />
<br />
Remember that you need the {{ic|tun}} module loaded and the {{Pkg|bridge-utils}} package installed.<br />
<br />
Create the vde2/tap device:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
# ip link set tap0 up<br />
<br />
Create bridge:<br />
<br />
# brctl addbr br0<br />
<br />
Add devices:<br />
<br />
# brctl addif br0 eth0<br />
# brctl addif br0 tap0<br />
<br />
And configure bridge interface:<br />
<br />
# dhcpcd br0<br />
<br />
==== Startup scripts ====<br />
<br />
All devices must be set up, but only the bridge needs an IP address. For physical devices on the bridge (e.g. {{ic|eth0}}), this can be done with [[netctl]] using a custom Ethernet profile with:<br />
<br />
{{hc|/etc/netctl/ethernet-noip|<nowiki><br />
Description='A more versatile static Ethernet connection'<br />
Interface=eth0<br />
Connection=ethernet<br />
IP=no<br />
</nowiki>}}<br />
<br />
The following custom systemd service can be used to create and activate a VDE2 tap interface usable by members of the {{ic|users}} group.<br />
<br />
{{hc|/etc/systemd/system/vde2@.service|<nowiki><br />
[Unit]<br />
Description=Network Connectivity for %i<br />
Wants=network.target<br />
Before=network.target<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=/usr/bin/vde_switch -tap %i -daemon -mod 660 -group users<br />
ExecStart=/usr/bin/ip link set dev %i up<br />
ExecStop=/usr/bin/ip addr flush dev %i<br />
ExecStop=/usr/bin/ip link set dev %i down<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
And finally, you can create the [[Bridge with netctl|bridge interface with netctl]].<br />
<br />
== Graphics ==<br />
<br />
QEMU can use the following different graphic outputs: {{ic|std}}, {{ic|qxl}}, {{ic|vmware}}, {{ic|virtio}}, {{ic|cirrus}} and {{ic|none}}.<br />
<br />
=== std ===<br />
<br />
With {{ic|-vga std}} you can get a resolution of up to 2560 x 1600 pixels without requiring guest drivers. This is the default since QEMU 2.2.<br />
<br />
=== qxl ===<br />
<br />
QXL is a paravirtual graphics driver with 2D support. To use it, pass the {{ic|-vga qxl}} option and install drivers in the guest. You may want to use SPICE for improved graphical performance when using QXL.<br />
<br />
On Linux guests, the {{ic|qxl}} and {{ic|bochs_drm}} kernel modules must be loaded in order to get decent performance.<br />
<br />
==== SPICE ====<br />
The [http://spice-space.org/ SPICE project] aims to provide a complete open source solution for remote access to virtual machines in a seamless way.<br />
<br />
SPICE can only be used when using QXL as the graphical output.<br />
<br />
The following is an example of booting with SPICE as the remote desktop protocol, including support for copy and paste with the host:<br />
<br />
$ qemu-system-x86_64 -vga qxl -spice port=5930,disable-ticketing -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
From the [https://www.linux-kvm.org/page/SPICE SPICE page on the KVM wiki]: "''The {{ic|-device virtio-serial-pci}} option adds the virtio-serial device, {{ic|1=-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0}} opens a port for spice vdagent in that device and {{ic|1=-chardev spicevmc,id=spicechannel0,name=vdagent}} adds a spicevmc chardev for that port. It is important that the {{ic|1=chardev=}} option of the {{ic|virtserialport}} device matches the {{ic|1=id=}} option given to the {{ic|chardev}} option ({{ic|spicechannel0}} in this example). It is also important that the port name is {{ic|com.redhat.spice.0}}, because that is the namespace where vdagent is looking for in the guest. And finally, specify {{ic|1=name=vdagent}} so that spice knows what this channel is for.''"<br />
<br />
{{Tip|Since QEMU in SPICE mode acts similarly to a remote desktop server, it may be more convenient to run QEMU in daemon mode with the {{ic|-daemonize}} parameter.}}<br />
<br />
Connect to the guest by using a SPICE client. {{pkg|virt-viewer}} is the recommended SPICE client by the protocol developers:<br />
<br />
$ remote-viewer spice://127.0.0.1:5930<br />
<br />
The reference and test implementation {{Pkg|spice-gtk3}} can also be used:<br />
<br />
$ spicy -h 127.0.0.1 -p 5930<br />
<br />
Other [http://www.spice-space.org/download.html clients], including for other platforms, are also available.<br />
<br />
Using [[wikipedia:Unix_socket|Unix sockets]] instead of TCP ports does not involve using network stack on the host system, so it is [https://unix.stackexchange.com/questions/91774/performance-of-unix-sockets-vs-tcp-ports reportedly] better for performance. Example:<br />
$ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent -spice unix,addr=/tmp/vm_spice.socket,disable-ticketing<br />
<br />
Then connect via:<br />
<br />
$ remote-viewer spice+unix:///tmp/vm_spice.socket<br />
<br />
or via:<br />
<br />
$ spicy --uri="spice+unix:///tmp/vm_spice.socket"<br />
<br />
For improved support for multiple monitors, clipboard sharing, etc. the following packages should be installed on the guest:<br />
* {{Pkg|spice-vdagent}}: Spice agent xorg client that enables copy and paste between client and X-session and more<br />
* {{Pkg|xf86-video-qxl}} {{AUR|xf86-video-qxl-git}}: Xorg X11 qxl video driver<br />
* For other operating systems, see the Guest section on [http://www.spice-space.org/download.html SPICE-Space download] page.<br />
<br />
Enable {{ic|spice-vdagentd.service}} after installation.<br />
<br />
===== Password authentication with SPICE =====<br />
If you want to enable password authentication with SPICE you need to remove {{ic|disable-ticketing}} from the {{ic|-spice}} argument and instead add {{ic|1=password=''yourpassword''}}. For example:<br />
<br />
$ qemu-system-x86_64 -vga qxl -spice port=5900,password=''yourpassword'' -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
Your SPICE client should now ask for the password to be able to connect to the SPICE server.<br />
<br />
===== TLS encryption =====<br />
<br />
You can also configure TLS encryption for communicating with the SPICE server. First, you need to have a directory which contains the following files (the names must be exactly as indicated):<br />
* {{ic|ca-cert.pem}}: the CA master certificate.<br />
* {{ic|server-cert.pem}}: the server certificate signed with {{ic|ca-cert.pem}}.<br />
* {{ic|server-key.pem}}: the server private key.<br />
<br />
An example of generation of self-signed certificates with your own generated CA for your server is shown in the [https://www.spice-space.org/spice-user-manual.html#_generating_self_signed_certificates Spice User Manual].<br />
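As a rough sketch of how such files could be produced with a throwaway self-signed CA (subject fields, key size and validity below are placeholder values; follow the Spice User Manual linked above for the authoritative procedure):<br />

```shell
#!/bin/sh
# Create a toy CA, then a server key and a certificate signed by that CA.
# All subject fields are example values and should be replaced.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ca-key.pem -out ca-cert.pem \
    -subj "/C=XX/L=city/O=organization/CN=my-ca"
openssl req -newkey rsa:2048 -nodes \
    -keyout server-key.pem -out server-req.pem \
    -subj "/C=XX/L=city/O=organization/CN=hostname"
openssl x509 -req -days 365 -in server-req.pem \
    -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial \
    -out server-cert.pem
```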
<br />
Afterwards, you can run QEMU with SPICE as explained above but using the following {{ic|-spice}} argument: {{ic|1=-spice tls-port=5901,password=''yourpassword'',x509-dir=''/path/to/pki_certs''}}, where {{ic|''/path/to/pki_certs''}} is the directory path that contains the three needed files shown earlier.<br />
<br />
It is now possible to connect to the server using {{pkg|virt-viewer}}:<br />
<br />
$ remote-viewer spice://''hostname''?tls-port=5901 --spice-ca-file=''/path/to/ca-cert.pem'' --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
Keep in mind that the {{ic|--spice-host-subject}} parameter needs to be set according to your {{ic|server-cert.pem}} subject. You also need to copy {{ic|ca-cert.pem}} to every client to verify the server certificate.<br />
<br />
{{Tip|You can get the subject line of the server certificate in the correct format for {{ic|--spice-host-subject}} (with entries separated by commas) using the following command: {{bc|<nowiki>$ openssl x509 -noout -subject -in server-cert.pem | cut -d' ' -f2- | sed 's/\///' | sed 's/\//,/g'</nowiki>}}<br />
}}<br />
<br />
The equivalent {{pkg|spice-gtk3}} command is:<br />
<br />
$ spicy -h ''hostname'' -s 5901 --spice-ca-file=ca-cert.pem --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
=== vmware ===<br />
<br />
Although it is a bit buggy, it performs better than std and cirrus. Install the VMware drivers {{Pkg|xf86-video-vmware}} and {{Pkg|xf86-input-vmmouse}} for Arch Linux guests.<br />
<br />
=== virtio ===<br />
<br />
{{ic|virtio-vga}} / {{ic|virtio-gpu}} is a paravirtual 3D graphics driver based on [https://virgil3d.github.io/ virgl]. Currently a work in progress, supporting only very recent (>= 4.4) Linux guests with {{Pkg|mesa}} (>=11.2) compiled with the option {{ic|1=--with-gallium-drivers=virgl}}.<br />
<br />
To enable 3D acceleration on the guest system select this vga with {{ic|-vga virtio}} and enable the opengl context in the display device with {{ic|1=-display sdl,gl=on}} or {{ic|1=-display gtk,gl=on}} for the sdl and gtk display output respectively. Successful configuration can be confirmed looking at the kernel log in the guest:<br />
<br />
{{hc|$ dmesg {{!}} grep drm |<br />
[drm] pci: virtio-vga detected<br />
[drm] virgl 3d acceleration enabled<br />
}}<br />
<br />
As of September 2016, support for the spice protocol is under development and can be tested installing the development release of {{Pkg|spice}} (>= 0.13.2) and recompiling qemu.<br />
<br />
For more information visit [https://www.kraxel.org/blog/tag/virgl/ kraxel's blog].<br />
<br />
=== cirrus ===<br />
<br />
The cirrus graphical adapter was the default [http://wiki.qemu.org/ChangeLog/2.2#VGA before 2.2]. It [https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ should not] be used on modern systems.<br />
<br />
=== none ===<br />
<br />
This is like a PC that has no VGA card at all. You would not even be able to access it with the {{ic|-vnc}} option. Also, this is different from the {{ic|-nographic}} option which lets QEMU emulate a VGA card, but disables the SDL display.<br />
<br />
=== vnc ===<br />
<br />
When using the {{ic|-nographic}} option, you can add the {{ic|-vnc display}} option to have QEMU listen on {{ic|display}} and redirect the VGA display to the VNC session. There is an example of this in the [[#Starting QEMU virtual machines on boot]] section's example configs.<br />
<br />
$ qemu-system-x86_64 -vga std -nographic -vnc :0<br />
$ gvncviewer :0<br />
<br />
When using VNC, you might experience keyboard problems described (in gory details) [https://www.berrange.com/posts/2010/07/04/more-than-you-or-i-ever-wanted-to-know-about-virtual-keyboard-handling/ here]. The solution is ''not'' to use the {{ic|-k}} option on QEMU, and to use {{ic|gvncviewer}} from {{Pkg|gtk-vnc}}. See also [http://www.mail-archive.com/libvir-list@redhat.com/msg13340.html this] message posted on libvirt's mailing list.<br />
<br />
== Audio ==<br />
<br />
=== Host ===<br />
<br />
The audio driver used by QEMU is set with the {{ic|QEMU_AUDIO_DRV}} environment variable:<br />
<br />
$ export QEMU_AUDIO_DRV=pa<br />
<br />
Run the following command to get QEMU's configuration options related to PulseAudio:<br />
<br />
$ qemu-system-x86_64 -audio-help | awk '/Name: pa/' RS=<br />
<br />
The listed options can be exported as environment variables, for example:<br />
<br />
{{bc|1=<br />
$ export QEMU_PA_SINK=alsa_output.pci-0000_04_01.0.analog-stereo.monitor<br />
$ export QEMU_PA_SOURCE=input<br />
}}<br />
<br />
=== Guest ===<br />
To get a list of the supported emulation audio drivers:<br />
$ qemu-system-x86_64 -soundhw help<br />
<br />
To use e.g. {{ic|hda}} driver for the guest use the {{ic|-soundhw hda}} command with QEMU.<br />
<br />
{{Note|Video graphic card emulated drivers for the guest machine may also cause a problem with the sound quality. Test one by one to make it work. You can list possible options with {{ic|<nowiki>qemu-system-x86_64 -h | grep vga</nowiki>}}.}}<br />
<br />
== Installing virtio drivers ==<br />
<br />
QEMU offers guests the ability to use paravirtualized block and network devices using the [http://wiki.libvirt.org/page/Virtio virtio] drivers, which provide better performance and lower overhead.<br />
<br />
* A virtio block device requires the option {{Ic|-drive}} instead of the simple {{Ic|-hd*}} plus {{Ic|1=if=virtio}}:<br />
$ qemu-system-x86_64 -boot order=c -drive file=''disk_image'',if=virtio<br />
<br />
{{Note|{{Ic|1=-boot order=c}} is absolutely necessary when you want to boot from it. There is no auto-detection as with {{Ic|-hd*}}.}}<br />
<br />
* Almost the same goes for the network:<br />
$ qemu-system-x86_64 -net nic,model=virtio<br />
<br />
{{Note|This will only work if the guest machine has drivers for virtio devices. Linux does, and the required drivers are included in Arch Linux, but there is no guarantee that virtio devices will work with other operating systems.}}<br />
<br />
=== Preparing an (Arch) Linux guest ===<br />
<br />
To use virtio devices after an Arch Linux guest has been installed, the following modules must be loaded in the guest: {{Ic|virtio}}, {{Ic|virtio_pci}}, {{Ic|virtio_blk}}, {{Ic|virtio_net}}, and {{Ic|virtio_ring}}. For 32-bit guests, the specific "virtio" module is not necessary.<br />
<br />
If you want to boot from a virtio disk, the initial ramdisk must contain the necessary modules. By default, this is handled by [[mkinitcpio]]'s {{ic|autodetect}} hook. Otherwise use the {{ic|MODULES}} array in {{ic|/etc/mkinitcpio.conf}} to include the necessary modules and rebuild the initial ramdisk.<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
MODULES="virtio virtio_blk virtio_pci virtio_net"}}<br />
<br />
Virtio disks are recognized with the prefix {{ic|'''v'''}} (e.g. {{ic|'''v'''da}}, {{ic|'''v'''db}}, etc.); therefore, changes must be made in at least {{ic|/etc/fstab}} and {{ic|/boot/grub/grub.cfg}} when booting from a virtio disk.<br />
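For example, a hypothetical {{ic|/etc/fstab}} line referencing the device node directly would change as follows (the device names and mount options are illustrative):<br />

```
# before, with the disk attached as IDE/SATA:
/dev/sda1  /  ext4  defaults  0  1
# after switching the same disk to virtio:
/dev/vda1  /  ext4  defaults  0  1
```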
<br />
{{Tip|When referencing disks by [[UUID]] in both {{ic|/etc/fstab}} and bootloader, nothing has to be done.}}<br />
<br />
Further information on paravirtualization with KVM can be found [http://www.linux-kvm.org/page/Boot_from_virtio_block_device here].<br />
<br />
You might also want to install {{Pkg|qemu-guest-agent}} to implement support for QMP commands that will enhance the hypervisor management capabilities. After installing the package you can enable and start the {{ic|qemu-ga.service}}.<br />
<br />
=== Preparing a Windows guest ===<br />
<br />
{{Note|1=The only (reliable) way to upgrade a Windows 8.1 guest to Windows 10 seems to be to temporarily choose cpu core2duo,nx for the install [http://ubuntuforums.org/showthread.php?t=2289210]. After the install, you may revert to other cpu settings (8/8/2015).}}<br />
<br />
==== Block device drivers ====<br />
<br />
===== New Install of Windows =====<br />
<br />
Windows does not come with the virtio drivers. Therefore, you will need to load them during installation. There are basically two ways to do this: via Floppy Disk or via ISO files. Both images can be downloaded from the [https://fedoraproject.org/wiki/Windows_Virtio_Drivers Fedora repository].<br />
<br />
The floppy disk option is difficult because you will need to press F6 (Shift-F6 on newer Windows) at the very beginning of powering on QEMU. This is tricky since you need time to connect your VNC console window. You can attempt to add a delay to the boot sequence. See {{man|1|qemu}} for more details about applying a delay at boot.<br />
<br />
The ISO option to load drivers is the preferred way, but it is available only on Windows Vista and Windows Server 2008 and later. The procedure is to load the image with virtio drivers in an additional cdrom device along with the primary disk device and Windows installer:<br />
<br />
$ qemu-system-x86_64 ... \<br />
-drive file=''/path/to/primary/disk.img'',index=0,media=disk,if=virtio \<br />
-drive file=''/path/to/installer.iso'',index=2,media=cdrom \<br />
-drive file=''/path/to/virtio.iso'',index=3,media=cdrom \<br />
...<br />
<br />
During the installation, the Windows installer will ask you for your Product key and perform some additional checks. When it gets to the "Where do you want to install Windows?" screen, it will give a warning that no disks are found. Follow the example instructions below (based on Windows Server 2012 R2 with Update).<br />
<br />
* Select the option {{ic|Load Drivers}}.<br />
* Uncheck the box for "Hide drivers that aren't compatible with this computer's hardware".<br />
* Click the Browse button and open the CDROM for the virtio iso, usually named "virtio-win-XX".<br />
* Now browse to {{ic|E:\viostor\[your-os]\amd64}}, select it, and press OK.<br />
* Click Next<br />
<br />
You should now see your virtio disk(s) listed here, ready to be selected, formatted and installed to.<br />
<br />
===== Change Existing Windows VM to use virtio =====<br />
Modifying an existing Windows guest to boot from a virtio disk is a bit tricky.<br />
<br />
You can download the virtio disk driver from the [https://fedoraproject.org/wiki/Windows_Virtio_Drivers Fedora repository].<br />
<br />
Now you need to create a new disk image, which will force Windows to search for the driver. For example:<br />
<br />
$ qemu-img create -f qcow2 ''fake.qcow2'' 1G<br />
<br />
Run the original Windows guest (with the boot disk still in IDE mode) with the fake disk (in virtio mode) and a CD-ROM with the driver.<br />
<br />
$ qemu-system-x86_64 -m 512 -vga std -drive file=''windows_disk_image'',if=ide -drive file=''fake.qcow2'',if=virtio -cdrom virtio-win-0.1-81.iso<br />
<br />
Windows will detect the fake disk and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the SCSI drive with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.<br />
<br />
When the installation is successful, you can turn off the virtual machine and launch it again, now with the boot disk attached in virtio mode:<br />
<br />
$ qemu-system-x86_64 -m 512 -vga std -drive file=''windows_disk_image'',if=virtio<br />
<br />
{{Note|If you encounter the Blue Screen of Death, make sure you did not forget the {{ic|-m}} parameter, and that you do not boot with virtio instead of ide for the system drive before drivers are installed.}}<br />
<br />
==== Network drivers ====<br />
<br />
Installing virtio network drivers is a bit easier; simply add the {{ic|-net}} argument as explained above.<br />
<br />
$ qemu-system-x86_64 -m 512 -vga std -drive file=''windows_disk_image'',if=virtio -net nic,model=virtio -cdrom virtio-win-0.1-74.iso<br />
<br />
Windows will detect the network adapter and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the network adapter with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.<br />
<br />
==== Balloon driver ====<br />
<br />
If you want to track the guest's memory state (for example via the {{ic|virsh}} command {{ic|dommemstat}}) or change the guest's memory size at runtime (you still will not be able to change the memory size itself, but can limit memory usage by inflating the balloon driver), you will need to install the guest balloon driver.<br />
<br />
For this, go to ''Device Manager'', locate ''PCI standard RAM Controller'' in ''System devices'' (or an unrecognized PCI controller under ''Other devices'') and choose ''Update driver''. In the opened window, choose ''Browse my computer...'' and select the CD-ROM (do not forget the ''Include subdirectories'' checkbox). Reboot after installation. This installs the driver and lets you inflate the balloon (for example via the hmp command {{ic|balloon ''memory_size''}}, which causes the balloon to take as much memory as possible in order to shrink the guest's available memory size to ''memory_size''). However, you still will not be able to track the guest memory state. In order to do this, you need to install the ''Balloon'' service properly. Open a command line as administrator, navigate on the CD-ROM into the ''Balloon'' directory and deeper, depending on your system and architecture. Once in the ''amd64'' (''x86'') directory, run {{ic|blnsrv.exe -i}}, which performs the installation. After that, the {{ic|virsh}} command {{ic|dommemstat}} should output all supported values.<br />
<br />
=== Preparing a FreeBSD guest ===<br />
<br />
Install the {{ic|emulators/virtio-kmod}} port if you are using FreeBSD 8.3 or later, up until 10.0-CURRENT, where the drivers are included in the kernel. After installation, add the following to your {{ic|/boot/loader.conf}} file:<br />
<br />
{{bc|<nowiki><br />
virtio_load="YES"<br />
virtio_pci_load="YES"<br />
virtio_blk_load="YES"<br />
if_vtnet_load="YES"<br />
virtio_balloon_load="YES"<br />
</nowiki>}}<br />
<br />
Then modify your {{ic|/etc/fstab}} by doing the following:<br />
<br />
{{bc|<nowiki><br />
sed -i .bak "s/ada/vtbd/g" /etc/fstab<br />
</nowiki>}}<br />
<br />
And verify that {{ic|/etc/fstab}} is consistent. If anything goes wrong, just boot into a rescue CD and copy {{ic|/etc/fstab.bak}} back to {{ic|/etc/fstab}}.<br />
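To preview what the substitution does before touching the real file, the same sed expression can be run on sample input (a sketch; the device names are examples):<br />

```shell
#!/bin/sh
# Run the ada -> vtbd substitution from above on sample fstab lines
# without modifying any file.
printf '/dev/ada0p2\t/\tufs\trw\t1\t1\n/dev/ada0p3\tnone\tswap\tsw\t0\t0\n' \
    | sed "s/ada/vtbd/g"
```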
<br />
== QEMU Monitor ==<br />
<br />
While QEMU is running, a monitor console provides several ways to interact with the running virtual machine. The QEMU Monitor offers interesting capabilities such as obtaining information about the current virtual machine, hotplugging devices, creating snapshots of its current state, etc. To see the list of all commands, run {{ic|help}} or {{ic|?}} in the QEMU monitor console or review the relevant section of the [http://download.qemu-project.org/qemu-doc.html#pcsys_005fmonitor official QEMU documentation].<br />
<br />
=== Accessing the monitor console ===<br />
<br />
When using the {{ic|std}} default graphics option, one can access the QEMU Monitor by pressing {{ic|Ctrl+Alt+2}} or by clicking ''View > compatmonitor0'' in the QEMU window. To return to the virtual machine graphical view either press {{ic|Ctrl+Alt+1}} or click ''View > VGA''.<br />
<br />
However, the standard method of accessing the monitor is not always convenient and does not work in all graphic outputs QEMU supports. Alternative options for accessing the monitor are described below:<br />
<br />
* [[telnet]]: Run QEMU with the {{ic|-monitor telnet:127.0.0.1:''port'',server,nowait}} parameter. When the virtual machine is started you will be able to access the monitor via telnet:<br />
$ telnet 127.0.0.1 ''port''<br />
{{Note|If {{ic|127.0.0.1}} is specified as the IP to listen it will be only possible to connect to the monitor from the same host QEMU is running on. If connecting from remote hosts is desired, QEMU must be told to listen {{ic|0.0.0.0}} as follows: {{ic|-monitor telnet:0.0.0.0:''port'',server,nowait}}. Keep in mind that it is recommended to have a [[firewall]] configured in this case or make sure your local network is completely trustworthy since this connection is completely unauthenticated and unencrypted.}}<br />
<br />
* UNIX socket: Run QEMU with the {{ic|-monitor unix:''socketfile'',server,nowait}} parameter. Then you can connect with either {{pkg|socat}} or {{pkg|openbsd-netcat}}.<br />
<br />
For example, if QEMU is run via:<br />
<br />
$ qemu-system-x86_64 ''[...]'' -monitor unix:/tmp/monitor.sock,server,nowait ''[...]''<br />
<br />
It is possible to connect to the monitor with:<br />
<br />
$ socat - UNIX-CONNECT:/tmp/monitor.sock<br />
<br />
Or with:<br />
<br />
$ nc -U /tmp/monitor.sock<br />
<br />
* TCP: You can expose the monitor over TCP with the argument {{ic|-monitor tcp:127.0.0.1:''port'',server,nowait}}. Then connect with netcat, either {{pkg|openbsd-netcat}} or {{pkg|gnu-netcat}} by running:<br />
<br />
$ nc 127.0.0.1 ''port''<br />
<br />
{{Note|In order to be able to connect to the TCP socket from devices other than the host QEMU is running on, you need to listen on {{ic|0.0.0.0}} as explained in the telnet case. The same security warnings apply here as well.}}<br />
<br />
* Standard I/O: It is possible to access the monitor automatically from the same terminal QEMU is run in by starting it with the argument {{ic|-monitor stdio}}.<br />
<br />
=== Sending keyboard presses to the virtual machine using the monitor console ===<br />
<br />
Some combinations of keys may be difficult to perform on virtual machines due to the host intercepting them instead in some configurations (a notable example is the {{ic|Ctrl+Alt+F*}} key combinations, which change the active tty). To avoid this problem, the problematic combination of keys may be sent via the monitor console instead. Switch to the monitor and use the {{ic|sendkey}} command to forward the necessary keypresses to the virtual machine. For example:<br />
<br />
(qemu) sendkey ctrl-alt-f2<br />
<br />
=== Creating and managing snapshots via the monitor console ===<br />
<br />
{{Note|This feature will '''only''' work when the virtual machine disk image is in ''qcow2'' format. It will not work with ''raw'' images.}}<br />
<br />
It is sometimes desirable to save the current state of a virtual machine and to be able to revert it to a previously saved snapshot at any time. The QEMU monitor console provides the user with the necessary utilities to create snapshots, manage them, and revert the machine state to a saved snapshot.<br />
<br />
* Use {{ic|savevm ''name''}} in order to create a snapshot with the tag ''name''.<br />
* Use {{ic|loadvm ''name''}} to revert the virtual machine to the state of the snapshot ''name''.<br />
* Use {{ic|delvm ''name''}} to delete the snapshot tagged as ''name''.<br />
* Use {{ic|info snapshots}} to see a list of saved snapshots. Snapshots are identified by both an auto-incremented ID number and a text tag (set by the user on snapshot creation).<br />
<br />
=== Running the virtual machine in immutable mode ===<br />
<br />
It is possible to run a virtual machine in a frozen state so that all changes will be discarded when the virtual machine is powered off, simply by running QEMU with the {{ic|-snapshot}} parameter. When the disk image is written to by the guest, changes will be saved in a temporary file in {{ic|/tmp}} and will be discarded when QEMU halts.<br />
<br />
However, if a machine is running in frozen mode it is still possible to save the changes to the disk image if it is afterwards desired by using the monitor console and running the following command:<br />
<br />
(qemu) commit<br />
<br />
If snapshots are created while running in frozen mode, they will likewise be discarded as soon as QEMU exits unless the changes are explicitly committed to disk.<br />
<br />
=== Pause and power options via the monitor console ===<br />
<br />
Some operations of a physical machine can be emulated by QEMU using some monitor commands:<br />
<br />
* {{ic|system_powerdown}} will send an ACPI shutdown request to the virtual machine. The effect is similar to pressing the power button on a physical machine.<br />
* {{ic|system_reset}} will reset the virtual machine, similarly to pressing the reset button on a physical machine. This operation can cause data loss and file system corruption, since the virtual machine is not cleanly restarted.<br />
* {{ic|stop}} will pause the virtual machine.<br />
* {{ic|cont}} will resume a virtual machine previously paused.<br />
<br />
=== Taking screenshots of the virtual machine ===<br />
<br />
Screenshots of the virtual machine graphic display can be obtained in the PPM format by running the following command in the monitor console:<br />
<br />
(qemu) screendump ''file.ppm''<br />
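PPM is a trivial uncompressed format, so a dump can be checked or post-processed with standard tools. As a standalone illustration (no QEMU needed; a hand-made 1×1 pixel file stands in for a real dump):<br />

```shell
# Write a minimal stand-in PPM: magic "P6", 1x1 image, max value 255, one white pixel
printf 'P6\n1 1\n255\n\377\377\377' > /tmp/shot.ppm

# The first two bytes are the magic number identifying the format
head -c 2 /tmp/shot.ppm   # -> P6
```

A real {{ic|screendump}} file starts with the same {{ic|P6}} header and can be converted to other formats with common image tools.<br />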
<br />
== Tips and tricks ==<br />
<br />
=== Starting QEMU virtual machines on boot ===<br />
<br />
==== With libvirt ====<br />
<br />
If a virtual machine is set up with [[libvirt]], it can be configured with {{ic|virsh autostart}} or through the ''virt-manager'' GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".<br />
<br />
==== Custom script ====<br />
<br />
To run QEMU VMs on boot, you can use the following systemd unit and configuration.<br />
<br />
{{hc|/etc/systemd/system/qemu@.service|<nowiki><br />
[Unit]<br />
Description=QEMU virtual machine<br />
<br />
[Service]<br />
Environment="type=system-x86_64" "haltcmd=kill -INT $MAINPID"<br />
EnvironmentFile=/etc/conf.d/qemu.d/%i<br />
PIDFile=/tmp/%i.pid<br />
ExecStart=/usr/bin/env qemu-${type} -name %i -nographic -pidfile /tmp/%i.pid $args<br />
ExecStop=/bin/sh -c ${haltcmd}<br />
TimeoutStopSec=30<br />
KillMode=none<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
{{Note|<br />
* According to the {{man|5|systemd.service}} and {{man|5|systemd.kill}} man pages, it is necessary to use the {{ic|1=KillMode=none}} option. Otherwise the main qemu process will be killed immediately after the {{ic|ExecStop}} command quits (it simply echoes one string) and your guest system will not be able to shut down correctly.<br />
* It is necessary to use the {{ic|PIDFile}} option. Otherwise systemd cannot tell whether the main qemu process has terminated and your guest system will not be able to shut down correctly: on host shutdown, systemd would proceed without waiting for the VM to shut down.<br />
}}<br />
<br />
Then create per-VM configuration files, named {{ic|/etc/conf.d/qemu.d/''vm_name''}}, with the following variables set:<br />
<br />
; type<br />
: QEMU binary to call. If specified, it will be prepended with {{ic|/usr/bin/qemu-}} and that binary will be used to start the VM. E.g. you can boot {{ic|qemu-system-arm}} images with {{ic|1=type="system-arm"}}.<br />
; args<br />
: QEMU command line to start with. Will always be prepended with {{ic|-name ${vm} -nographic}}.<br />
; haltcmd<br />
: Command to shut down a VM safely. In this example, the QEMU monitor is exposed via telnet using {{ic|-monitor telnet:..}} and the VMs are powered off via ACPI by sending {{ic|system_powerdown}} to the monitor with the {{ic|nc}} command. You can use SSH or other methods as well.<br />
<br />
Example configs:<br />
<br />
{{hc|/etc/conf.d/qemu.d/one|<nowiki><br />
type="system-x86_64"<br />
<br />
args="-enable-kvm -m 512 -hda /dev/mapper/vg0-vm1 -net nic,macaddr=DE:AD:BE:EF:E0:00 \<br />
-net tap,ifname=tap0 -serial telnet:localhost:7000,server,nowait,nodelay \<br />
-monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"<br />
<br />
haltcmd="echo 'system_powerdown' | nc localhost 7100" # or netcat/ncat<br />
<br />
# You can use other ways to shut down your VM correctly<br />
#haltcmd="ssh powermanager@vm1 sudo poweroff"<br />
</nowiki>}}<br />
<br />
{{hc|/etc/conf.d/qemu.d/two|<nowiki><br />
args="-enable-kvm -m 512 -hda /srv/kvm/vm2.img -net nic,macaddr=DE:AD:BE:EF:E0:01 \<br />
-net tap,ifname=tap1 -serial telnet:localhost:7001,server,nowait,nodelay \<br />
-monitor telnet:localhost:7101,server,nowait,nodelay -vnc :1"<br />
<br />
haltcmd="echo 'system_powerdown' | nc localhost 7101"<br />
</nowiki>}}<br />
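The unit above composes the final command line from the {{ic|type}} and {{ic|args}} variables. The expansion can be sketched outside systemd as follows (a standalone illustration using a temporary file; the VM name {{ic|one}} is just an example):<br />

```shell
# Recreate a per-VM config file like the one the unit reads via EnvironmentFile
conf=$(mktemp)
printf 'type="system-x86_64"\nargs="-enable-kvm -m 512"\n' > "$conf"

# Source it and expand the command the way ExecStart does for instance %i = "one"
. "$conf"
vm=one
echo "/usr/bin/qemu-${type} -name ${vm} -nographic ${args}"
# -> /usr/bin/qemu-system-x86_64 -name one -nographic -enable-kvm -m 512
```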
<br />
To set which virtual machines will start on boot-up, [[enable]] the {{ic|qemu@''vm_name''.service}} systemd unit.<br />
<br />
=== Mouse integration ===<br />
<br />
To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the options {{ic|-usb -device usb-tablet}}. This means QEMU is able to report the mouse position without having to grab the mouse. This also overrides PS/2 mouse emulation when activated. For example:<br />
<br />
$ qemu-system-x86_64 -hda ''disk_image'' -m 512 -vga std -usb -device usb-tablet<br />
<br />
If that does not work, try the tip at [[#Mouse cursor is jittery or erratic]].<br />
<br />
=== Pass-through host USB device ===<br />
<br />
To access a physical USB device connected to the host from the VM, you can use the option {{ic|-usbdevice host:''vendor_id'':''product_id''}}.<br />
<br />
You can find the {{ic|vendor_id}} and {{ic|product_id}} of your device with the {{ic|lsusb}} command.<br />
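The IDs appear in {{ic|lsusb}} output as a {{ic|vendor:product}} pair after the {{ic|ID}} keyword. A rough way to extract them with {{ic|sed}}, shown here on a sample output line for a hypothetical device:<br />

```shell
# A sample lsusb-style line (hypothetical device)
line='Bus 003 Device 007: ID 046d:c52b Logitech, Inc. Unifying Receiver'

# Pull out the vendor:product pair following "ID"
ids=$(printf '%s\n' "$line" | sed -n 's/.* ID \([0-9a-f]*:[0-9a-f]*\).*/\1/p')
echo "$ids"                                  # -> 046d:c52b
echo "-usbdevice host:${ids%:*}:${ids#*:}"   # -> -usbdevice host:046d:c52b
```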
<br />
Since the default I440FX chipset emulated by QEMU features a single UHCI controller (USB 1), the {{ic|-usbdevice}} option will try to attach your physical device to it. In some cases this may cause issues with newer devices. A possible solution is to emulate the [http://wiki.qemu.org/Features/Q35 ICH9] chipset, which offers an EHCI controller supporting up to 12 devices, using the option {{ic|1=-machine type=q35}}.<br />
<br />
A less invasive solution is to emulate an EHCI (USB 2) or XHCI (USB 3) controller with the option {{ic|1=-device usb-ehci,id=ehci}} or {{ic|1=-device nec-usb-xhci,id=xhci}} respectively and then attach your physical device to it with the option {{ic|1=-device usb-host,..}} as follows:<br />
<br />
-device usb-host,bus='''controller_id'''.0,vendorid=0x'''vendor_id''',productid=0x'''product_id'''<br />
<br />
You can also add the {{ic|1=...,port=''<n>''}} setting to the previous option to specify in which physical port of the virtual controller you want to attach your device, which is useful if you want to add multiple USB devices to the VM.<br />
<br />
{{Note|If you encounter permission errors when running QEMU, see [[Udev#Writing udev rules]] for information on how to set permissions of the device.}}<br />
<br />
=== USB redirection with SPICE ===<br />
<br />
When using [[#SPICE]] it is possible to redirect USB devices from the client to the virtual machine without needing to specify them in the QEMU command. It is possible to configure the number of USB slots available for redirected devices (the number of slots determines the maximum number of devices which can be redirected simultaneously). The main advantage of using SPICE for redirection compared to the previously mentioned {{ic|-usbdevice}} method is the possibility of hot-swapping USB devices after the virtual machine has started, without needing to halt it in order to remove USB devices from the redirection or to add new ones. This method of USB redirection also allows redirecting USB devices over the network, from the client to the server. In summary, it is the most flexible method of using USB devices in a QEMU virtual machine.<br />
<br />
We need to add one EHCI/UHCI controller per available USB redirection slot desired as well as one SPICE redirection channel per slot. For example, adding the following arguments to the QEMU command you use for starting the virtual machine in SPICE mode will start the virtual machine with three available USB slots for redirection:<br />
<br />
{{bc|<nowiki>-device ich9-usb-ehci1,id=usb \<br />
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,multifunction=on \<br />
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2 \<br />
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev1 \<br />
-device usb-redir,chardev=usbredirchardev1,id=usbredirdev1 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev2 \<br />
-device usb-redir,chardev=usbredirchardev2,id=usbredirdev2 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev3 \<br />
-device usb-redir,chardev=usbredirchardev3,id=usbredirdev3</nowiki>}}<br />
<br />
Both {{ic|spicy}} from {{pkg|spice-gtk3}} (''Input > Select USB Devices for redirection'') and {{ic|remote-viewer}} from {{pkg|virt-viewer}} (''File > USB device selection'') support this feature. Please make sure that you have installed the necessary SPICE Guest Tools on the virtual machine for this functionality to work as expected (see the [[#SPICE]] section for more information).<br />
<br />
{{Warning|Keep in mind that when a USB device is redirected from the client, it will not be usable from the client operating system itself until the redirection is stopped. It is especially important to never redirect input devices (namely mouse and keyboard), since it will then be difficult to access the SPICE client menus to revert the situation, because the client will not respond to the input devices after they are redirected to the virtual machine.}}<br />
<br />
=== Enabling KSM ===<br />
<br />
Kernel Samepage Merging (KSM) is a feature of the Linux kernel that allows an application to register with the kernel to have its pages merged with those of other processes that have also registered. The KSM mechanism allows guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.<br />
<br />
To enable KSM, simply run<br />
<br />
# echo 1 > /sys/kernel/mm/ksm/run<br />
<br />
To make it permanent, you can use [[systemd#Temporary files|systemd's temporary files]]:<br />
<br />
{{hc|/etc/tmpfiles.d/ksm.conf|<br />
w /sys/kernel/mm/ksm/run - - - - 1<br />
}}<br />
<br />
If KSM is running, and there are pages to be merged (i.e. at least two similar VMs are running), then {{ic|/sys/kernel/mm/ksm/pages_shared}} should be non-zero. See https://www.kernel.org/doc/Documentation/vm/ksm.txt for more information.<br />
<br />
{{Tip|An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory: {{bc|$ grep . /sys/kernel/mm/ksm/*}}}}<br />
<br />
=== Multi-monitor support ===<br />
The Linux QXL driver supports four heads (virtual screens) by default. This can be changed via the {{ic|1=qxl.heads=N}} kernel parameter.<br />
<br />
The default VGA memory size for QXL devices is 16M (VRAM size is 64M). This is not sufficient if you would like to enable two 1920x1200 monitors since that requires 2 × 1920 × 4 (color depth) × 1200 = 17.6 MiB VGA memory. This can be changed by replacing {{ic|-vga qxl}} by {{ic|<nowiki>-vga none -device qxl-vga,vgamem_mb=32</nowiki>}}. If you ever increase vgamem_mb beyond 64M, then you also have to increase the {{ic|vram_size_mb}} option.<br />
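The 17.6 MiB figure above follows from heads × width × height × bytes per pixel; as a quick check with shell arithmetic:<br />

```shell
# Memory needed for two 1920x1200 heads at 32-bit color (4 bytes per pixel)
heads=2; width=1920; height=1200; bytes_per_pixel=4
bytes=$((heads * width * height * bytes_per_pixel))
echo "$bytes bytes"   # -> 18432000 bytes, i.e. ~17.6 MiB, so the 16M default is too small
```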
<br />
=== Copy and paste ===<br />
<br />
To enable copy and paste between the host and the guest, you need to enable the spice agent communication channel. This requires adding a virtio-serial device to the guest and opening a port for the spice vdagent. It is also required to install the spice vdagent in the guest ({{Pkg|spice-vdagent}} for Arch guests, [http://www.spice-space.org/download.html Windows guest tools] for Windows guests). Make sure the agent is running (and started automatically in the future). See [[#SPICE]] for the necessary procedure to use QEMU with the SPICE protocol.<br />
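As a sketch, the required QEMU arguments look roughly like the following (the channel name {{ic|com.redhat.spice.0}} and the {{ic|spicevmc}} chardev are the conventional ones for the SPICE vdagent; verify against your QEMU version):<br />

```shell
-device virtio-serial-pci \
-chardev spicevmc,id=vdagent,name=vdagent \
-device virtserialport,chardev=vdagent,name=com.redhat.spice.0
```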
<br />
=== Windows-specific notes ===<br />
<br />
QEMU can run any version of Windows from Windows 95 through Windows 10.<br />
<br />
It is possible to run [[Windows PE]] in QEMU.<br />
<br />
==== Fast startup ====<br />
{{Note|An administrator account is required to change power settings.}}<br />
For Windows 8 (or later) guests, it is better to disable "Turn on fast startup (recommended)" from the Power Options of the Control Panel, as explained in the following [https://www.tenforums.com/tutorials/4189-turn-off-fast-startup-windows-10-a.html forum page], since it causes the guest to hang during every other boot.<br />
<br />
Fast Startup may also need to be disabled for changes to the {{ic|-smp}} option to be properly applied.<br />
<br />
==== Remote Desktop Protocol ====<br />
<br />
If you use a MS Windows guest, you might want to use RDP to connect to your guest VM. If you are using a VLAN or are not in the same network as the guest, use:<br />
<br />
$ qemu-system-x86_64 -nographic -net user,hostfwd=tcp::5555-:3389<br />
<br />
Then connect with either {{Pkg|rdesktop}} or {{Pkg|freerdp}} to the guest. For example:<br />
<br />
$ xfreerdp -g 2048x1152 localhost:5555 -z -x lan<br />
<br />
== Troubleshooting ==<br />
<br />
=== Virtual machine runs too slowly ===<br />
<br />
There are a number of techniques that you can use to improve the performance of your virtual machine. For example:<br />
<br />
* Use the {{ic|-cpu host}} option to make QEMU emulate the host's exact CPU. If you do not do this, it may be trying to emulate a more generic CPU.<br />
* Especially for Windows guests, enable [http://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html Hyper-V enlightenments]: {{ic|1=-cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time}}.<br />
* If the host machine has multiple CPUs, assign the guest more CPUs using the {{ic|-smp}} option.<br />
* Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128 MiB of memory to each virtual machine. Use the {{ic|-m}} option to assign more memory. For example, {{ic|-m 1024}} runs a virtual machine with 1024 MiB of memory.<br />
* Use KVM if possible: add {{ic|1=-machine type=pc,accel=kvm}} to the QEMU start command you use.<br />
* If supported by drivers in the guest operating system, use [http://wiki.libvirt.org/page/Virtio virtio] for network and/or block devices. For example:<br />
$ qemu-system-x86_64 -net nic,model=virtio -net tap,if=tap0,script=no -drive file=''disk_image'',media=disk,if=virtio<br />
* Use TAP devices instead of user-mode networking. See [[#Tap networking with QEMU]].<br />
* If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's file system. For example, you can mount an [[Ext4|ext4 file system]] with the option {{ic|1=barrier=0}}. You should read the documentation for any options that you change because sometimes performance-enhancing options for file systems come at the cost of data integrity.<br />
* If you have a raw disk image, you may want to disable the cache:<br />
$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio,'''cache=none'''<br />
* Use the native Linux AIO:<br />
$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio''',aio=native,cache.direct=on'''<br />
* If you use a qcow2 disk image, I/O performance can be improved considerably by ensuring that the L2 cache is of sufficient size. The [https://blogs.igalia.com/berto/2015/12/17/improving-disk-io-performance-in-qemu-2-5-with-the-qcow2-l2-cache/ formula] to calculate L2 cache is: l2_cache_size = disk_size * 8 / cluster_size. Assuming the qcow2 image was created with the default cluster size of 64K, this means that for every 8 GB in size of the qcow2 image, 1 MB of L2 cache is best for performance. Only 1 MB is used by QEMU by default; specifying a larger cache is done on the QEMU command line. For instance, to specify 4 MB of cache (suitable for a 32 GB disk with a cluster size of 64K):<br />
$ qemu-system-x86_64 -drive file=''disk_image'',format=qcow2,l2-cache-size=4M<br />
* If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling [[wikipedia:Kernel_SamePage_Merging_(KSM)|kernel same-page merging]]. See [[#Enabling KSM]].<br />
* In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU with the {{ic|-balloon virtio}} option.<br />
* It is possible to use an emulation layer for an ICH-9 AHCI controller (although it may be unstable). The AHCI emulation supports [[Wikipedia:Native_Command_Queuing|NCQ]], so multiple read or write requests can be outstanding at the same time:<br />
$ qemu-system-x86_64 -drive id=disk,file=''disk_image'',if=none -device ich9-ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0<br />
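The L2 cache figure in the list above can be checked with shell arithmetic, applying the formula {{ic|1=l2_cache_size = disk_size * 8 / cluster_size}}:<br />

```shell
# qcow2 L2 cache needed for a 32 GB image with the default 64K cluster size
disk_size=$((32 * 1024 * 1024 * 1024))
cluster_size=$((64 * 1024))
l2_bytes=$((disk_size * 8 / cluster_size))
echo "$((l2_bytes / 1024 / 1024)) MiB"   # -> 4 MiB
```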
<br />
See http://www.linux-kvm.org/page/Tuning_KVM for more information.<br />
<br />
=== Mouse cursor is jittery or erratic ===<br />
<br />
If the cursor jumps around the screen uncontrollably, entering this on the terminal before starting QEMU might help:<br />
<br />
$ export SDL_VIDEO_X11_DGAMOUSE=0<br />
<br />
If this helps, you can add this to your {{ic|~/.bashrc}} file.<br />
<br />
=== No visible Cursor ===<br />
<br />
Add {{ic|-show-cursor}} to QEMU's options to see a mouse cursor.<br />
<br />
If that still does not work, make sure you have set your display device appropriately.<br />
<br />
For example: {{ic|-vga qxl}}<br />
<br />
=== Unable to move/attach Cursor ===<br />
<br />
Replace {{ic|-usbdevice tablet}} with {{ic|-usb}} as QEMU option.<br />
<br />
=== Keyboard seems broken or the arrow keys do not work ===<br />
<br />
Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in {{ic|/usr/share/qemu/keymaps}}.<br />
<br />
$ qemu-system-x86_64 -k ''keymap'' ''disk_image''<br />
<br />
=== Guest display stretches on window resize ===<br />
<br />
To restore default window size, press {{ic|Ctrl+Alt+u}}.<br />
<br />
=== ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy ===<br />
<br />
If an error message like this is printed when starting QEMU with {{ic|-enable-kvm}} option:<br />
<br />
ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy<br />
failed to initialize KVM: Device or resource busy<br />
<br />
that means another [[hypervisor]] is currently running. It is not recommended or possible to run several hypervisors in parallel.<br />
<br />
=== libgfapi error message ===<br />
<br />
The error message displayed at startup:<br />
<br />
Failed to open module: libgfapi.so.0: cannot open shared object file: No such file or directory<br />
<br />
[[Install]] {{pkg|glusterfs}} or ignore the error message, as GlusterFS is an optional dependency.<br />
<br />
=== Kernel panic on LIVE-environments===<br />
<br />
If you start a live environment (or, more generally, boot a system) you may encounter this:<br />
<br />
 [ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)<br />
<br />
or some other boot-hindering message (e.g. "cannot unpack initramfs", "cannot start service foo").<br />
Try starting the VM with the {{ic|-m ''VALUE''}} switch and an appropriate amount of RAM. If the amount of RAM is too low, you will probably encounter issues similar to those above.<br />
<br />
=== Windows 7 guest suffers low-quality sound ===<br />
<br />
Using the {{ic|hda}} audio driver for Windows 7 guest may result in low-quality sound. Changing the audio driver to {{ic|ac97}} by passing the {{ic|-soundhw ac97}} arguments to QEMU and installing the AC97 driver from [http://www.realtek.com.tw/downloads/downloadsView.aspx?Langid=1&PNid=14&PFid=23&Level=4&Conn=3&DownTypeID=3&GetDown=false Realtek AC'97 Audio Codecs] in the guest may solve the problem. See [https://bugzilla.redhat.com/show_bug.cgi?id=1176761#c16 Red Hat Bugzilla – Bug 1176761] for more information.<br />
<br />
=== Could not access KVM kernel module: Permission denied ===<br />
<br />
If you encounter the following error:<br />
<br />
libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied failed to initialize KVM: Permission denied<br />
<br />
Since systemd 234 the {{ic|kvm}} group is assigned a dynamic ID (see [https://bugs.archlinux.org/task/54943 the bug report]). To work around this error, edit the file {{ic|/etc/libvirt/qemu.conf}} and change the line:<br />
<br />
group = "78"<br />
<br />
to<br />
<br />
group = "kvm"<br />
<br />
=== Missing performance graphs in virt-manager ===<br />
<br />
[[Install]] {{pkg|python2-cairo}}.<br />
<br />
More information:<br />
* https://bugs.archlinux.org/task/54472<br />
* https://bbs.archlinux.org/viewtopic.php?id=230319<br />
<br />
== See also ==<br />
<br />
* [http://qemu.org Official QEMU website]<br />
* [http://www.linux-kvm.org Official KVM website]<br />
* [http://qemu.weilnetz.de/qemu-doc.html QEMU Emulator User Documentation]<br />
* [https://en.wikibooks.org/wiki/QEMU QEMU Wikibook]<br />
* [http://alien.slackbook.org/dokuwiki/doku.php?id=slackware:qemu Hardware virtualization with QEMU] by AlienBOB (last updated in 2008)<br />
* [http://blog.falconindy.com/articles/build-a-virtual-army.html Building a Virtual Army] by Falconindy<br />
* [http://git.qemu.org/?p=qemu.git;a=tree;f=docs Latest docs]<br />
* [http://qemu.weilnetz.de/ QEMU on Windows]<br />
* [[wikipedia:Qemu|Wikipedia]]<br />
* [https://wiki.debian.org/QEMU QEMU - Debian Wiki]<br />
* [https://people.gnome.org/~markmc/qemu-networking.html QEMU Networking on gnome.org]<br />
* [http://bsdwiki.reedmedia.net/wiki/networking_qemu_virtual_bsd_systems.html Networking QEMU Virtual BSD Systems]<br />
* [https://www.gnu.org/software/hurd/hurd/running/qemu.html QEMU on gnu.org]<br />
* [https://wiki.freebsd.org/qemu QEMU on FreeBSD as host]<br />
* [https://wiki.mikejung.biz/KVM_/_Xen KVM/QEMU Virtio Tuning and SSD VM Optimization Guide]<br />
* [https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/part.virt.qemu.html Managing Virtual Machines with QEMU - OpenSUSE documentation]<br />
* [https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liaat/liaatkvm.htm KVM on IBM Knowledge Center]</div>Mousemanhttps://wiki.archlinux.org/index.php?title=Sshguard&diff=483682Sshguard2017-08-02T18:09:01Z<p>Mouseman: /* FirewallD */ typo</p>
<hr />
<div>[[Category:Secure Shell]]<br />
[[es:Sshguard]]<br />
[[ja:Sshguard]]<br />
{{Related articles start}}<br />
{{Related|fail2ban}}<br />
{{Related|ssh}}<br />
{{Related articles end}}<br />
{{warning|Using an IP blacklist will stop trivial attacks but it relies on an additional daemon and successful logging (the partition containing /var can become full, especially if an attacker is pounding on the server). Additionally, if the attacker knows your IP address, they can send packets with a spoofed source header and get you locked out of the server. [[SSH keys]] provide an elegant solution to the problem of brute forcing without these problems.}}<br />
[http://www.sshguard.net sshguard] is a daemon that protects [[SSH]] and other services against brute-force attacks, similar to [[fail2ban]].<br />
<br />
sshguard differs from similar tools such as fail2ban in that it is written in C, and is lighter and simpler to use with fewer features, while performing its core function equally well.<br />
<br />
sshguard is not vulnerable to most (or maybe any) of the log analysis [https://web.archive.org/web/20120625102244/http://www.ossec.net/main/attacking-log-analysis-tools vulnerabilities] that have caused problems for similar tools.<br />
<br />
==Installation==<br />
[[Install]] the {{Pkg|sshguard}} package.<br />
<br />
==Setup==<br />
<br />
sshguard works by monitoring {{ic|/var/log/auth.log}}, syslog-ng or the systemd journal for failed login attempts. For each failed attempt, the offending host is banned from further communication for a limited amount of time. The default ban time starts at 7 minutes and doubles with each subsequent failed login. sshguard can be configured to permanently ban a host after too many failed attempts.<br />
<br />
Both temporary and permanent bans are done by adding an entry into the "sshguard" chain in iptables that drops all packets from the offender. The ban is then logged to syslog and ends up in {{ic|/var/log/auth.log}}, or the systemd journal, if systemd is being used. To make the ban only affect port 22, simply do not send packets going to other ports through the "sshguard" chain.<br />
<br />
You must configure a firewall to be used with sshguard in order for blocking to work. <br />
<br />
==== FirewallD ====<br />
<br />
Starting with version 2.0, sshguard can work with firewalld. Make sure you have firewalld enabled, configured and set up first. To make sshguard write to your zone of preference, issue the following command:<br />
<br />
# firewallctl zone "<zone name>" --permanent add rich-rule "rule source ipset=sshguard4 drop"<br />
<br />
If you use IPv6, issue the same command but substitute {{ic|sshguard4}} with {{ic|sshguard6}}. Finish with:<br />
# firewall-cmd --reload<br />
<br />
You can verify the above with <br />
# firewall-cmd --info-ipset=sshguard4<br />
<br />
Finally, in {{ic|/etc/sshguard.conf}}, find the line for {{ic|BACKEND}} and change it as follows:<br />
<br />
BACKEND="/usr/lib/sshguard/sshg-fw-firewalld"<br />
<br />
==== UFW ====<br />
<br />
If UFW is installed and enabled, it must be given the ability to pass along DROP control to sshguard. This is accomplished by modifying {{ic|/etc/ufw/before.rules}} to contain the following lines, which should be inserted just after the section for loopback devices. {{Note|Users running sshd on a non-standard port should substitute it in the final line below (where 22 is the standard port).}}<br />
<br />
{{hc|/etc/ufw/before.rules|<br />
# allow all on loopback<br />
-A ufw-before-input -i lo -j ACCEPT<br />
-A ufw-before-output -o lo -j ACCEPT<br />
<br />
# hand off control for sshd to sshguard<br />
-N sshguard<br />
-A ufw-before-input -p tcp --dport 22 -j sshguard<br />
}}<br />
<br />
[[Restart]] ufw after making this modification.<br />
<br />
==== iptables ====<br />
<br />
</div>Mousemanhttps://wiki.archlinux.org/index.php?title=Sshguard&diff=483681Sshguard2017-08-02T18:08:17Z<p>Mouseman: /* Setup */ Added firewalld backend configuration</p>
<hr />
<div>[[Category:Secure Shell]]<br />
[[es:Sshguard]]<br />
[[ja:Sshguard]]<br />
{{Related articles start}}<br />
{{Related|fail2ban}}<br />
{{Related|ssh}}<br />
{{Related articles end}}<br />
{{warning|Using an IP blacklist will stop trivial attacks but it relies on an additional daemon and successful logging (the partition containing /var can become full, especially if an attacker is pounding on the server). Additionally, if the attacker knows your IP address, they can send packets with a spoofed source header and get you locked out of the server. [[SSH keys]] provide an elegant solution to the problem of brute forcing without these problems.}}<br />
[http://www.sshguard.net sshguard] is a daemon that protects [[SSH]] and other services against brute-force attacks, similar to [[fail2ban]].<br />
<br />
sshguard differs from ''fail2ban'' in that it is written in C, and is lighter and simpler to use, with fewer features, while performing its core function equally well.<br />
<br />
sshguard is not vulnerable to most (or maybe any) of the log analysis [https://web.archive.org/web/20120625102244/http://www.ossec.net/main/attacking-log-analysis-tools vulnerabilities] that have caused problems for similar tools.<br />
<br />
==Installation==<br />
[[Install]] the {{Pkg|sshguard}} package.<br />
<br />
==Setup==<br />
<br />
sshguard works by monitoring {{ic|/var/log/auth.log}}, syslog-ng or the systemd journal for failed login attempts. For each failed attempt, the offending host is banned from further communication for a limited amount of time. The ban starts at 7 minutes by default and doubles with each subsequent failed login. sshguard can also be configured to permanently ban a host after too many failed attempts.<br />
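The doubling schedule can be illustrated with a few lines of plain shell arithmetic (illustrative only; the numbers are the defaults described here, not read from sshguard itself):<br />

```shell
# Illustrative only: the default escalating ban schedule
# (7 minute base, doubling on each repeat offense).
secs=420  # 7 minutes, in seconds
for offense in 1 2 3 4; do
    echo "offense $offense: banned for $(( secs / 60 )) minutes"
    secs=$(( secs * 2 ))
done
```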
<br />
Both temporary and permanent bans are done by adding an entry into the "sshguard" chain in iptables that drops all packets from the offender. The ban is then logged to syslog and ends up in {{ic|/var/log/auth.log}}, or the systemd journal, if systemd is being used. To make the ban only affect port 22, simply do not send packets going to other ports through the "sshguard" chain.<br />
<br />
You must configure a firewall to be used with sshguard in order for blocking to work. <br />
<br />
==== FirewallD ====<br />
<br />
Starting with version 2.0, sshguard can work with firewalld. Make sure firewalld is installed, enabled and configured first. To make sshguard write to your zone of preference, issue the following command:<br />
<br />
# firewall-cmd --permanent --zone="<zone name>" --add-rich-rule="rule source ipset=sshguard4 drop"<br />
<br />
If you use IPv6, issue the same command but substitute {{ic|sshguard6}} for {{ic|sshguard4}}. Finish with<br />
# firewall-cmd --reload<br />
<br />
You can verify the above with <br />
# firewall-cmd --info-ipset=sshguard4<br />
<br />
Finally, in {{ic|/etc/sshguard.conf}}, find the {{ic|BACKEND}} line and change it as follows:<br />
<br />
BACKEND="/usr/lib/sshguard/sshg-fw-firewalld"<br />
<br />
==== UFW ====<br />
<br />
If UFW is installed and enabled, it must be given the ability to pass along DROP control to sshguard. This is accomplished by modifying {{ic|/etc/ufw/before.rules}} to contain the following lines, which should be inserted just after the section for loopback devices. {{Note|Users running sshd on a non-standard port should substitute it for 22 in the last line below.}}<br />
<br />
{{hc|/etc/ufw/before.rules|<br />
# allow all on loopback<br />
-A ufw-before-input -i lo -j ACCEPT<br />
-A ufw-before-output -o lo -j ACCEPT<br />
<br />
# hand off control for sshd to sshguard<br />
-N sshguard<br />
-A ufw-before-input -p tcp --dport 22 -j sshguard<br />
}}<br />
<br />
[[Restart]] ufw after making this modification.<br />
<br />
==== iptables ====<br />
<br />
{{Note|See [[iptables]] and [[Simple stateful firewall]] first to set up a firewall.}}<br />
<br />
The main configuration required is creating a chain named {{ic|sshguard}}, where sshguard automatically inserts rules to drop packets coming from bad hosts:<br />
# iptables -N sshguard<br />
<br />
Then add a rule to jump to the {{ic|sshguard}} chain from the {{ic|INPUT}} chain. This rule must be added '''before''' any other rules processing the ports that sshguard is protecting. See [http://www.sshguard.net/docs/setup/#netfilter-iptables this example].<br />
# iptables -A INPUT -p tcp --dport 22 -j sshguard<br />
<br />
To save the rules:<br />
# iptables-save > /etc/iptables/iptables.rules<br />
<br />
{{Note|For IPv6, repeat the same steps with ''ip6tables'' and save the rules with ''ip6tables-save'' to {{ic|/etc/iptables/ip6tables.rules}}.}}<br />
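Spelled out, the IPv6 steps mirror the IPv4 ones above (a sketch assuming the same SSH port 22 and the default Arch rules file location):<br />

```shell
# IPv6 mirror of the IPv4 setup above; run as root.
ip6tables -N sshguard
ip6tables -A INPUT -p tcp --dport 22 -j sshguard
ip6tables-save > /etc/iptables/ip6tables.rules
```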
<br />
==Usage==<br />
<br />
===systemd===<br />
<br />
[[Enable]] and start the {{ic|sshguard.service}}.<br />
<br />
===syslog-ng===<br />
If you have {{Pkg|syslog-ng}} installed, you may start sshguard directly from the command line instead.<br />
<br />
/usr/sbin/sshguard -l /var/log/auth.log -b /var/db/sshguard/blacklist.db<br />
<br />
==Configuration==<br />
<br />
Configuration is done in {{ic|/etc/sshguard.conf}} which is required for ''sshguard'' to start. A commented example is located at {{ic|/usr/share/doc/sshguard/sshguard.conf.sample}}.<br />
<br />
{{Note|Piped commands and runtime flags in ''sshguard's'' systemd units [https://sourceforge.net/p/sshguard/mailman/message/35709860/ are not supported]. Such flags can be modified in the configuration file.}}<br />
<br />
===Change danger level===<br />
<br />
By default in the Arch-provided configuration file, offenders become permanently banned once they have reached a "danger" level of 120 (or 12 failed logins; see [http://www.sshguard.net/docs/terminology/ terminology] for more details). This behavior can be modified by prepending a danger level to the blacklist file.<br />
<br />
BLACKLIST_FILE=200:/var/db/sshguard/blacklist.db<br />
<br />
The {{ic|200:}} in this example tells sshguard to permanently ban a host after achieving a danger level of 200.<br />
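Since each failed login adds 10 danger points by default (120 points corresponds to 12 failed logins, as noted above), a blacklist threshold translates directly into a number of failed attempts. A trivial arithmetic sketch:<br />

```shell
# Illustrative: default scoring is 10 danger points per failed login,
# so a threshold of 200 corresponds to 20 failed attempts.
points_per_attempt=10
threshold=200
echo "permanent ban after $(( threshold / points_per_attempt )) failed logins"
```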
<br />
Finally [[restart]] the {{ic|sshguard.service}} unit.<br />
<br />
===Aggressive banning===<br />
<br />
For users under constant attack, it may be beneficial to enable a more aggressive banning policy. If you can be reasonably sure that accidental failed logins are unlikely, you can instruct sshguard to ban hosts after a single failed login. Modify the parameters in the configuration file as follows:<br />
THRESHOLD=10<br />
BLACKLIST_FILE=10:/var/db/sshguard/blacklist.db<br />
<br />
Finally [[restart]] the {{ic|sshguard.service}} unit.<br />
<br />
==Tips and tricks==<br />
<br />
=== Unbanning ===<br />
<br />
If you ''yourself'' get banned, you can wait to get unbanned automatically or use iptables to unban yourself. First check if your IP is banned by sshguard:<br />
# iptables -L sshguard --line-numbers --numeric<br />
<br />
Then use the following command to unban, with the line number identified in the previous command:<br />
# iptables -D sshguard <line-number><br />
<br />
You will also need to remove the IP address from {{ic|/var/db/sshguard/blacklist.db}} in order to make unbanning persistent.<br />
# sed -i '/<ip-address>/d' /var/db/sshguard/blacklist.db<br />
<br />
=== Logging ===<br />
<br />
To see what is being passed to sshguard, examine the script in {{ic|/usr/lib/systemd/scripts/sshguard-journalctl}} and the systemd service {{ic|sshguard.service}}. An equivalent command to view the logs in the terminal:<br />
<br />
$ journalctl -afb -p info SYSLOG_FACILITY=4 SYSLOG_FACILITY=10</div>Mousemanhttps://wiki.archlinux.org/index.php?title=RemoteBox&diff=480963RemoteBox2017-07-01T15:58:52Z<p>Mouseman: added external resource</p>
<hr />
<div>[[Category:Virtualization]]<br />
{{Related articles start}}<br />
{{Related|VirtualBox}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|PhpVirtualBox}}<br />
{{Related articles end}}<br />
'''RemoteBox''' is an open source client for remotely managing [[VirtualBox]], written in [[Perl]]. In essence, you can remotely administer (i.e. over the network) an installation of VirtualBox on a server, including its guests, and interact with them as if they were running locally. VirtualBox is installed on 'the server' machine and RemoteBox runs on 'the client' machine. RemoteBox provides a complete GTK graphical interface with a look and feel very similar to that of VirtualBox's native GUI. If you are familiar with other virtualization software, such as VMware ESX, think of RemoteBox as the "poor man's" VI client.<br />
<br />
== Installation ==<br />
RemoteBox can be installed on the client with the {{AUR|remotebox}} package. It will pull in all the required GTK2 and Perl packages. However, an RDP client such as FreeRDP or rdesktop is also required and needs to be installed manually. As of this writing, {{AUR|freerdp-git}} 2.0.0.beta1 has been tested and found working.<br />
<br />
=== VirtualBox web service ===<br />
To use RemoteBox, you must have [[VirtualBox]] installed on your server, along with {{AUR|virtualbox-ext-oracle}} package. For a headless server not running a GUI, installing {{AUR|virtualbox-headless}} is suggested. It is also suggested to install {{Pkg|virtualbox-guest-iso}} on your server too.<br />
<br />
On your server running VirtualBox, create a new user with a homedir and login shell, for example:<br />
# useradd -m -g vboxusers -s /bin/bash vbox<br />
<br />
This will create a new user 'vbox' with 'vboxusers' as its primary group, a home directory and a login shell. The home directory is required for storing VirtualBox settings and virtual machine configurations. The shell is required because otherwise RemoteBox will not be able to log in. Now give it a password and record it somewhere safe:<br />
<br />
# passwd vbox<br />
<br />
Create a custom {{Ic|vboxweb-mod.service}} file by copying {{Ic|/usr/lib/systemd/system/vboxweb.service}} to {{Ic|/usr/lib/systemd/system/vboxweb-mod.service}}<br />
<br />
Modify {{Ic|/usr/lib/systemd/system/vboxweb-mod.service}} as follows:<br />
<nowiki> [Unit]<br />
Description=VirtualBox Web Service<br />
After=network.target<br />
<br />
[Service]<br />
Type=forking<br />
PIDFile=/run/vboxweb/vboxweb.pid<br />
ExecStart=/usr/bin/vboxwebsrv --pidfile /run/vboxweb/vboxweb.pid --host <your server ip> --background<br />
User=vbox<br />
Group=vboxusers<br />
<br />
[Install]<br />
WantedBy=multi-user.target</nowiki><br />
<br />
{{Note|Do not forget to replace {{Ic|<your server ip>}} with your server's main IP address.}}<br />
<br />
Create a tmpfiles.d rule for your {{Ic|vboxweb-mod.service}}:<br />
# echo "d /run/vboxweb 0755 vbox vboxusers" > /etc/tmpfiles.d/vboxweb-mod.conf<br />
<br />
Manually create the {{Ic|/run/vboxweb}} directory before starting {{Ic|vboxweb-mod.service}} for the first time:<br />
# mkdir /run/vboxweb<br />
# chown vbox:vboxusers /run/vboxweb<br />
# chmod 755 /run/vboxweb<br />
<br />
You can enable logging by editing the ExecStart line in the unit file above to include the {{Ic|--logfile <logfile location>}} directive. To enable verbose logging, also include the {{Ic|--verbose}} directive. Make sure the vbox user can create and write to the logfile you are configuring.<br />
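For example, a complete ExecStart line with logging enabled might look like the following (the logfile path is an assumption; pick one the vbox user can write to):<br />

```
ExecStart=/usr/bin/vboxwebsrv --pidfile /run/vboxweb/vboxweb.pid --host <your server ip> --logfile /var/log/vboxweb.log --verbose --background
```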
<br />
[[Start]]/[[enable]] {{ic|vboxweb-mod.service}}<br />
<br />
== Connecting RemoteBox to the vboxweb service ==<br />
Open RemoteBox and click the {{Ic|Connect}} button. Specify the following:<br />
URL: http://<your server ip>:18083<br />
Username: vbox<br />
Password: <password recorded earlier><br />
<br />
To make it easier to connect during future sessions, after logging in go to ''File > Connection Profiles'' and create a new connection profile.<br />
<br />
== Troubleshooting ==<br />
If you encounter a login problem connecting to the server, first check that the service is running. From the server console, use<br />
# systemctl status vboxweb-mod.service<br />
<br />
It should output that it is running. If not, check the logs with {{ic|journalctl}} and, if you configured one, the vboxweb service logfile for any leads.<br />
<br />
Even on verbose, the vboxweb service might not give you any lead as to what the problem is. In that case, you can become {{ic|vbox}} and run {{ic|vboxwebsrv}} from the command line.<br />
<br />
# su vbox<br />
<br />
Then manually start vboxwebsrv:<br />
$ /usr/bin/vboxwebsrv --pidfile /run/vboxweb/vboxweb.pid --host <your server ip><br />
<br />
Omit the {{ic|--background}} and {{ic|--logfile}} directives. If the service starts, the problem could be permissions to the logfile. Leave it running and check if you can connect with RemoteBox from the client.<br />
<br />
If you still cannot connect, you can stop the service with {{ic|Ctrl+c}} and start it with the {{ic|--background}} directive. Next, check with {{ic|ss -tlnp}} (or {{ic|netstat -tlnp}}) whether ''vboxwebsrv'' is listening on port 18083. If you see a different port, try connecting RemoteBox on that port instead.<br />
<br />
Another reason could be a firewall, either on your server, or on your client.<br />
<br />
If you are getting the following error message:<br />
vboxwebsrv: error: failed to initialize COM! hrc=NS_ERROR_FAILURE<br />
<br />
Check that your home directory exists and is writable by user 'vbox'. Also check that {{ic|$HOME/.config/VirtualBox}} gets created and populated with configuration files.<br />
<br />
== External resources ==<br />
* [http://remotebox.knobgoblin.org.uk/ RemoteBox Home Page]<br />
* [http://remotebox.knobgoblin.org.uk/docs/remotebox.pdf RemoteBox Manual]<br />
* [https://sourceforge.net/projects/remotebox/ RemoteBox on Sourceforge]</div>Mousemanhttps://wiki.archlinux.org/index.php?title=VirtualBox&diff=480962VirtualBox2017-07-01T15:56:36Z<p>Mouseman: Added RemoteBox link under Related.</p>
<hr />
<div>[[Category:Hypervisors]]<br />
[[cs:VirtualBox]]<br />
[[de:VirtualBox]]<br />
[[el:VirtualBox]]<br />
[[es:VirtualBox]]<br />
[[fr:VirtualBox]]<br />
[[hu:VirtualBox]]<br />
[[it:VirtualBox]]<br />
[[ja:VirtualBox]]<br />
[[pt:VirtualBox]]<br />
[[ru:VirtualBox]]<br />
[[zh-hans:VirtualBox]]<br />
{{Related articles start}}<br />
{{Related|VirtualBox/Tips and tricks}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|PhpVirtualBox}}<br />
{{Related|RemoteBox}}<br />
{{Related|Moving an existing install into (or out of) a virtual machine}}<br />
{{Related articles end}}<br />
<br />
[https://www.virtualbox.org VirtualBox] is a [[Wikipedia:Hypervisor|hypervisor]] used to run operating systems in a special environment, called a virtual machine, on top of the existing operating system. VirtualBox is in constant development and new features are implemented continuously. It comes with a [[Qt]] GUI interface, as well as headless and [[Wikipedia:Simple DirectMedia Layer|SDL]] command-line tools for managing and running virtual machines.<br />
<br />
In order to integrate functions of the host system to the guests, including shared folders and clipboard, video acceleration and a seamless window integration mode, ''guest additions'' are provided for some guest operating systems.<br />
<br />
== Installation steps for Arch Linux hosts ==<br />
<br />
In order to launch VirtualBox virtual machines on your Arch Linux box, follow these installation steps.<br />
<br />
=== Install the core packages ===<br />
<br />
[[Install]] the {{Pkg|virtualbox}} package. You will need to choose a package to provide host modules:<br />
* for {{Pkg|linux}} kernel choose {{Pkg|virtualbox-host-modules-arch}}<br />
* for other [[kernels]] choose {{Pkg|virtualbox-host-dkms}}<br />
<br />
To compile the VirtualBox modules provided by {{Pkg|virtualbox-host-dkms}}, it will also be necessary to install the appropriate headers package(s) for your installed kernel(s) (e.g. {{Pkg|linux-lts-headers}} for {{Pkg|linux-lts}}). [https://lists.archlinux.org/pipermail/arch-dev-public/2016-March/027808.html] When either VirtualBox or the kernel is updated, the kernel modules will be automatically recompiled thanks to the [[DKMS]] Pacman hook.<br />
<br />
=== Sign modules ===<br />
<br />
When using a custom kernel with {{ic|CONFIG_MODULE_SIG_FORCE}} option enabled, you must sign your modules with a key generated during kernel compilation.<br />
<br />
Navigate to your kernel tree folder and execute the following command:<br />
# for module in /lib/modules/$(uname -r)/kernel/misc/{vboxdrv,vboxnetadp,vboxnetflt,vboxpci}.ko; do ./scripts/sign-file sha1 certs/signing_key.pem certs/signing_key.x509 "$module"; done<br />
<br />
{{Note|Hashing algorithm does not have to match the one configured, but it must be built into the kernel.}}<br />
<br />
=== Load the VirtualBox kernel modules ===<br />
<br />
Since version 5.0.16, {{Pkg|virtualbox-host-modules-arch}} and {{Pkg|virtualbox-host-dkms}} use {{ic|systemd-modules-load.service}} to load all four VirtualBox modules automatically at boot time. For the modules to be loaded after installation, either reboot or load the modules once manually.<br />
<br />
{{Note|If you do not want the VirtualBox modules to be automatically loaded at boot time, you have to mask the default {{ic|/usr/lib/modules-load.d/virtualbox-host-modules-arch.conf}} (or {{ic|-dkms.conf}}) by creating an empty file (or symlink to {{ic|/dev/null}}) with the same name in {{ic|/etc/modules-load.d}}.}}<br />
<br />
Among the [[kernel modules]] VirtualBox uses, there is a mandatory module named {{ic|vboxdrv}}, which must be loaded before any virtual machines can run.<br />
<br />
To load the module manually, run:<br />
# modprobe vboxdrv<br />
<br />
The following modules are optional, but recommended to avoid trouble with some advanced configurations (detailed below): {{ic|vboxnetadp}}, {{ic|vboxnetflt}} and {{ic|vboxpci}}.<br />
<br />
* {{ic|vboxnetadp}} and {{ic|vboxnetflt}} are both needed when you intend to use the [https://www.virtualbox.org/manual/ch06.html#network_bridged bridged] or [https://www.virtualbox.org/manual/ch06.html#network_hostonly host-only networking] feature. More precisely, {{ic|vboxnetadp}} is needed to create the host interface in the VirtualBox global preferences, and {{ic|vboxnetflt}} is needed to launch a virtual machine using that network interface.<br />
<br />
* {{ic|vboxpci}} is needed when your virtual machine needs to pass through a PCI device on your host.<br />
<br />
{{Note|If the VirtualBox kernel modules were loaded in the kernel while you updated the modules, you need to reload them manually to use the new updated version. To do it, run {{ic|vboxreload}} as root.}}<br />
<br />
Finally, if you use the aforementioned "Host-only" or "bridge networking" feature, make sure {{pkg|net-tools}} is installed. VirtualBox actually uses {{ic|ifconfig}} and {{ic|route}} to assign the IP and route to the host interface configured with {{ic|VBoxManage hostonlyif}} or via the GUI in ''Settings > Network > Host-only Networks > Edit host-only network (space) > Adapter''.<br />
<br />
=== Accessing host USB devices in guest ===<br />
<br />
To use the USB ports of your host machine in your virtual machines, add users that will be authorized to use this feature to the {{ic|vboxusers}} [[group]].<br />
<br />
=== Guest additions disc ===<br />
<br />
It is also recommended to install the {{Pkg|virtualbox-guest-iso}} package on the host running VirtualBox. This package will act as a disc image that can be used to install the guest additions onto guest systems other than Arch Linux. The ''.iso'' file will be located at {{ic|/usr/lib/virtualbox/additions/VBoxGuestAdditions.iso}}, and may have to be mounted manually inside the virtual machine. Once mounted, you can run the guest additions installer inside the guest.<br />
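Inside a Linux guest, mounting the attached disc and launching the installer might look like this (a sketch; the {{ic|/dev/cdrom}} device node and the {{ic|/mnt}} mount point are assumptions, adjust to your guest):<br />

```shell
# Inside the guest, as root: mount the Guest Additions CD
# and run the Linux installer shipped on it.
mount /dev/cdrom /mnt
sh /mnt/VBoxLinuxAdditions.run
```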
<br />
=== Extension pack ===<br />
<br />
The Oracle Extension Pack provides [https://www.virtualbox.org/manual/ch01.html#intro-installing additional features] and is released under a non-free license '''only available for personal use'''. To install it, the {{aur|virtualbox-ext-oracle}} package is available, and a prebuilt version can be found in the [[Unofficial user repositories#seblu|seblu]] repository.<br />
<br />
If you prefer to use the traditional and manual way: download the extension manually and install it via the GUI (''File > Preferences > Extensions'') or via {{ic|VBoxManage extpack install <.vbox-extpack>}}, make sure you have a toolkit (like [[Polkit]], gksu, etc.) to grant privileged access to VirtualBox. The installation of this extension [https://www.virtualbox.org/ticket/8473 requires root access].<br />
<br />
=== Front-ends ===<br />
<br />
VirtualBox comes with three front-ends:<br />
<br />
* If you want to use VirtualBox with the regular GUI, use {{ic|VirtualBox}}.<br />
* If you want to launch and manage your virtual machines from the command-line, use the {{ic|VBoxSDL}} command, which only provides a plain window for the virtual machine without any overlays.<br />
* If you want to use VirtualBox without running any GUI (e.g. on a server), use the {{ic|VBoxHeadless}} command. With the VRDP extension you can still remotely access the displays of your virtual machines.<br />
<br />
Finally, you can also use [[phpVirtualBox]] to administrate your virtual machines via a web interface.<br />
<br />
Refer to the [https://www.virtualbox.org/manual VirtualBox manual] to learn how to create virtual machines.<br />
<br />
{{Warning|If you intend to store virtual disk images on a [[Btrfs]] file system, before creating any images, you should consider disabling [[Btrfs#Copy-On-Write_.28CoW.29|copy-on-Write]] for the destination directory of these images.}}<br />
<br />
== Installation steps for Arch Linux guests ==<br />
<br />
Boot the Arch installation media through one of the virtual machine's virtual drives. Then, complete the installation of a basic Arch system as explained in the [[Installation guide]].<br />
<br />
=== Installation in EFI mode ===<br />
<br />
If you want to install Arch Linux in EFI mode inside VirtualBox, in the settings of the virtual machine, choose ''System'' item from the panel on the left and ''Motherboard'' tab from the right panel, and check the checkbox ''Enable EFI (special OSes only)''. After selecting the kernel from the Arch Linux installation media's menu, the media will hang for a minute or two and will continue to boot the kernel normally afterwards. Be patient.<br />
<br />
Once the system and the boot loader are installed, VirtualBox will first attempt to run {{ic|/EFI/BOOT/BOOTX64.EFI}} from the [[ESP]]. If that first option fails, VirtualBox will then try the EFI shell script {{ic|startup.nsh}} from the root of the ESP. This means that in order to boot the system you have the following options:<br />
<br />
* [[Unified Extensible Firmware Interface#UEFI Shell|Launch the bootloader manually]] from the EFI shell every time;<br />
* Move the bootloader to the default {{ic|/EFI/BOOT/BOOTX64.EFI}} path;<br />
* Create a script named {{ic|startup.nsh}} at the ESP root containing the path to the boot loader application, e.g. {{ic|\EFI\grub\grubx64.efi}}.<br />
* Boot directly from the ESP partition using a [[EFISTUB#Using a startup.nsh script|startup.nsh script]]. <br />
<br />
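As an example of the {{ic|startup.nsh}} option, the script can be created from the running system (a sketch; the ESP mount point {{ic|/boot}} and the GRUB binary path are assumptions, adjust them to your layout):<br />

```shell
# Write a startup.nsh at the ESP root that chain-loads GRUB.
# Assumptions: ESP mounted at /boot, GRUB installed as \EFI\grub\grubx64.efi.
esp=/boot
printf '%s\r\n' '\EFI\grub\grubx64.efi' > "$esp/startup.nsh"
```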
Do not bother with the VirtualBox Boot Manager (accessible with {{ic|F2}} at boot), as it is buggy and incomplete. It doesn't store efivars set interactively. Therefore, EFI entries added to it manually in the firmware (accessed with {{ic|F12}} at boot time) or with {{Pkg|efibootmgr}} will persist after a reboot [https://www.virtualbox.org/ticket/11177 but are lost when the VM is shut down].<br />
<br />
See also [https://bbs.archlinux.org/viewtopic.php?id=158003 UEFI VirtualBox installation boot problems].<br />
<br />
=== Install the Guest Additions ===<br />
<br />
VirtualBox [https://www.virtualbox.org/manual/ch04.html Guest Additions] provides drivers and applications that optimize the guest operating system including improved image resolution and better control of the mouse. Within the installed guest system, install:<br />
* {{Pkg|virtualbox-guest-utils}} for VirtualBox Guest utilities with X support<br />
* {{Pkg|virtualbox-guest-utils-nox}} for VirtualBox Guest utilities without X support<br />
<br />
Both packages will make you choose a package to provide guest modules:<br />
* for {{Pkg|linux}} kernel choose {{Pkg|virtualbox-guest-modules-arch}}<br />
* for other [[kernels]] choose {{Pkg|virtualbox-guest-dkms}}<br />
<br />
To compile the virtualbox modules provided by {{Pkg|virtualbox-guest-dkms}}, it will also be necessary to install the appropriate headers package(s) for your installed kernel(s) (e.g. {{Pkg|linux-lts-headers}} for {{Pkg|linux-lts}}). [https://lists.archlinux.org/pipermail/arch-dev-public/2016-March/027808.html] When either VirtualBox or the kernel is updated, the kernel modules will be automatically recompiled thanks to the [[DKMS]] Pacman hook.<br />
<br />
{{Note|<nowiki></nowiki><br />
* You can alternatively install the Guest Additions with the ISO from the {{Pkg|virtualbox-guest-iso}} package, provided you installed it on the host system. To do this, go to the ''Devices'' menu and click ''Insert Guest Additions CD Image''.<br />
* To recompile the vbox kernel modules, run {{ic|rcvboxdrv}} as root.<br />
}}<br />
<br />
The guest additions running on your guest, and the VirtualBox application running on your host must have matching versions, otherwise the guest additions (like shared clipboard) may stop working. If you upgrade your guest (e.g. {{ic|pacman -Syu}}), make sure your VirtualBox application on this host is also the latest version. "Check for updates" in the VirtualBox GUI is sometimes not sufficient; check the virtualbox.org website.<br />
<br />
=== Set optimal framebuffer resolution ===<br />
<br />
{{Move|VirtualBox/Tips and tricks}}<br />
Typically after installing Guest Additions, a fullscreen Arch guest running X will be set to the optimal resolution for your display; however, the virtual console's framebuffer will be set to a standard, often smaller, resolution detected from VirtualBox's custom VESA driver.<br />
<br />
To use the virtual consoles at optimal resolution, Arch needs to recognize that resolution as valid, which in turn requires VirtualBox to pass this information along to the guest OS.<br />
<br />
First, check if your desired resolution is not already recognized by running the command:<br />
hwinfo --framebuffer<br />
<br />
If the optimal resolution does not show up, then you will need to run the {{ic|VBoxManage}} tool on the host machine and add "extra resolutions" to your virtual machine (on a Windows host, go to the VirtualBox installation directory to find {{ic|VBoxManage.exe}}). For example:<br />
<br />
VBoxManage setextradata "Arch Linux" "CustomVideoMode1" "1360x768x24"<br />
<br />
The parameters "Arch Linux" and "1360x768x24" in the example above should be replaced with your VM name and the desired framebuffer resolution. Incidentally, this command allows for defining up to 16 extra resolutions ("CustomVideoMode1" through "CustomVideoMode16").<br />
<br />
Afterwards, restart the virtual machine and run {{ic|hwinfo --framebuffer}} once more to verify that the new resolutions have been recognized by your guest system (which does not guarantee they will all work, depending on your hardware limitations).<br />
<br />
Finally, add a {{ic|1=video=''resolution''}} [[kernel parameter]] to set the framebuffer to the new resolution, for example {{ic|1=video=1360x768}}. <br />
<br />
{{Merge|GRUB/Tips_and_tricks#Setting_the_framebuffer_resolution}}<br />
<br />
If you use GRUB as your bootloader, you can edit {{ic|/etc/default/grub}} to include this kernel parameter in the {{ic|GRUB_CMDLINE_LINUX_DEFAULT}} list, like so:<br />
<br />
GRUB_CMDLINE_LINUX_DEFAULT="quiet video=1360x768"<br />
<br />
The GRUB menu itself may also be easily set to optimal resolution, by editing <br />
the {{ic|GRUB_GFXMODE}} option on the same configuration file:<br />
<br />
GRUB_GFXMODE="1360x768x24"<br />
<br />
On a standard Arch setup, you would then run {{ic|grub-mkconfig -o /boot/grub/grub.cfg}} to commit these changes to the bootloader.<br />
<br />
After these steps, the framebuffer resolution should be optimized for the GRUB menu and all virtual consoles.<br />
<br />
{{Note|The GRUB settings {{ic|GRUB_GFXPAYLOAD_LINUX}} and {{ic|vga}} will not fix the framebuffer, since they are overridden by virtue of Kernel Mode Setting, which is mandatory for using X under VirtualBox and only allows setting the framebuffer resolution via the kernel parameter described above.}}<br />
<br />
=== Load the VirtualBox kernel modules ===<br />
<br />
To load the modules automatically, [[enable]] {{ic|vboxservice.service}} which loads the modules and synchronizes the guest's system time with the host.<br />
<br />
To load the modules manually, type:<br />
# modprobe -a vboxguest vboxsf vboxvideo<br />
<br />
Since version 5.0.16, {{Pkg|virtualbox-guest-modules-arch}} and {{Pkg|virtualbox-guest-dkms}} use {{ic|systemd-modules-load.service}} to load their modules at boot time.<br />
<br />
{{Note|If you do not want the VirtualBox modules to be loaded at boot time, you have to mask the default {{ic|/usr/lib/modules-load.d/virtualbox-guest-modules-arch.conf}} (or {{ic|-dkms.conf}}) by creating an empty file (or symlink to {{ic|/dev/null}}) with the same name in {{ic|/etc/modules-load.d}}.}}<br />
<br />
=== Launch the VirtualBox guest services ===<br />
<br />
After the rather big installation step dealing with VirtualBox kernel modules, now you need to start the guest services. The guest services are actually just a binary executable called {{ic|VBoxClient}} which will interact with your X Window System. {{ic|VBoxClient}} manages the following features:<br />
* shared clipboard and drag and drop between the host and the guest;<br />
* seamless window mode;<br />
* automatic resizing of the guest display according to the size of the guest window;<br />
* checking the VirtualBox host version.<br />
<br />
All of these features can be enabled independently with their dedicated flags:<br />
$ VBoxClient --clipboard --draganddrop --seamless --display --checkhostversion<br />
<br />
As a shortcut, the {{ic|VBoxClient-all}} bash script enables all of these features.<br />
<br />
{{Pkg|virtualbox-guest-utils}} installs {{ic|/etc/xdg/autostart/vboxclient.desktop}} that launches {{ic|VBoxClient-all}} on logon. If your [[desktop environment]] or [[window manager]] does not support this scheme, you will need to set up autostarting yourself, see [[Autostarting#Graphical]] for more details.<br />
<br />
VirtualBox can also synchronize the time between the host and the guest; to do this, [[start/enable]] the {{ic|vboxservice.service}}.<br />
<br />
Now, you should have a working Arch Linux guest. Note that features like clipboard sharing are disabled by default in VirtualBox, and you will need to turn them on in the per-VM settings if you actually want to use them (e.g. ''Settings > General > Advanced > Shared Clipboard'').<br />
<br />
=== Hardware acceleration ===<br />
<br />
Hardware acceleration can be activated in the VirtualBox options. The [[GDM]] display manager 3.16+ is known to break hardware acceleration support. [https://bugzilla.gnome.org/show_bug.cgi?id=749390] So if you get issues with hardware acceleration, try out another display manager (lightdm seems to work fine). [https://bbs.archlinux.org/viewtopic.php?id=200025] [https://bbs.archlinux.org/viewtopic.php?pid=1607593#p1607593]<br />
<br />
=== Enable shared folders ===<br />
<br />
Shared folders are managed on the host, in the settings of the virtual machine accessible via the VirtualBox GUI, in the ''Shared Folders'' tab. There, the ''Folder Path'', the name of the mount point identified by ''Folder Name'', and options like ''Read-only'', ''Auto-mount'' and ''Make Permanent'' can be specified. These parameters can also be defined with the {{ic|VBoxManage}} command line utility; see [https://www.virtualbox.org/manual/ch04.html#sharedfolders the VirtualBox manual] for more details.<br />
<br />
Whichever method you use to mount your folders, a few preliminary steps are required first.<br />
<br />
To avoid the error {{ic|/sbin/mount.vboxsf: mounting failed with the error: No such device}}, make sure the {{ic|vboxsf}} kernel module is properly loaded. It should be, since all the guest kernel modules were enabled previously.<br />
<br />
Two additional steps are needed in order for the mount point to be accessible from users other than root:<br />
* the {{Pkg|virtualbox-guest-utils}} package creates a group {{ic|vboxsf}} (done in a previous step);<br />
* your username must be in {{ic|vboxsf}} [[group]].<br />
<br />
==== Manual mounting ====<br />
<br />
Use the following command to mount your folder in your Arch Linux guest:<br />
# mount -t vboxsf ''shared_folder_name'' ''mount_point_on_guest_system''<br />
<br />
The vboxsf filesystem offers other options which can be displayed with this command:<br />
# mount.vboxsf<br />
<br />
For example, if the user was not in the ''vboxsf'' group, we could have used this command to give them access to the mount point:<br />
# mount -t vboxsf -o uid=1000,gid=1000 home /mnt/<br />
<br />
Where ''uid'' and ''gid'' are the values of the user we want to give access to, as reported by the {{ic|id}} command run for that user.<br />
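For example, the values for the current user can be obtained with:<br />

```shell
# Numeric uid and gid of the current user, as used in the
# uid=/gid= mount options above.
uid=$(id -u)
gid=$(id -g)
echo "uid=$uid,gid=$gid"
```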
<br />
==== Automounting ====<br />
<br />
{{Note|Automounting requires {{ic|vboxservice.service}} to be started/enabled.}}<br />
<br />
In order for the automounting feature to work you must have checked the auto-mount checkbox in the GUI or used the optional {{ic|--automount}} argument with the command {{ic|VBoxManage sharedfolder}}.<br />
<br />
The shared folder should now appear in {{ic|/media/sf_''shared_folder_name''}}. If users cannot access the shared folders under {{ic|/media}}, check that {{ic|/media}} has permissions 755, or permissions 750 with group ownership {{ic|vboxsf}}. This is currently not the default if {{ic|/media}} is created by installing {{Pkg|virtualbox-guest-utils}}.<br />
<br />
You can use symlinks for more convenient access, avoiding the need to browse into that directory, e.g.:<br />
$ ln -s /media/sf_''shared_folder_name'' ~/''my_documents''<br />
<br />
==== Mount at boot ====<br />
<br />
You can mount your directory with [[fstab]]. However, to prevent startup problems with systemd, {{ic|1=comment=systemd.automount}} should be added to {{ic|/etc/fstab}}. This way, the shared folders are mounted only when those mount points are accessed, not during startup. This avoids problems, especially if the guest additions are not loaded yet when systemd reads fstab and mounts the partitions.<br />
''sharedFolderName'' ''/path/to/mntPtOnGuestMachine'' vboxsf uid=''user'',gid=''group'',rw,dmode=700,fmode=600,comment=systemd.automount 0 0<br />
<br />
* {{ic|''sharedFolderName''}}: the value from the VirtualMachine's ''Settings > SharedFolders > Edit > FolderName'' menu. This value can be different from the name of the real folder name on the host machine. To see the VirtualMachine's ''Settings'' go to the host OS VirtualBox application, select the corresponding virtual machine and click on ''Settings''.<br />
* {{ic|''/path/to/mntPtOnGuestMachine''}}: if not existing, this directory should be created manually (for example by using [[Core utilities#mkdir|mkdir]])<br />
* {{ic|dmode}}/{{ic|fmode}} are directory/file permissions for directories/files inside {{ic|''/path/to/mntPtOnGuestMachine''}}.<br />
<br />
As of 2012-08-02, mount.vboxsf does not support the ''nofail'' option, so the following will not work:<br />
''desktop'' ''/media/desktop'' vboxsf uid=''user'',gid=''group'',rw,dmode=700,fmode=600,nofail 0 0<br />
<br />
=== SSH from host to guest ===<br />
<br />
The network tab of the virtual machine settings contains, under ''Advanced'', a tool to create port forwarding rules.<br />
It is possible to use it to forward the guest SSH port 22 to a host port, say 3022. Then:<br />
<br />
user@host$ ssh -p 3022 $USER@localhost<br />
<br />
will establish a connection from Host to Guest.<br />
<br />
==== SSHFS as alternative to the shared folder ====<br />
<br />
Using this port forwarding and sshfs, it is straightforward to mount the guest filesystem onto the host:<br />
<br />
user@host$ sshfs -p 3022 $USER@localhost:$HOME ~/shared_folder<br />
<br />
and then transfer files between both.<br />
<br />
== Virtual disks management ==<br />
<br />
See also [[VirtualBox/Tips and tricks#Import/export VirtualBox virtual machines from/to other hypervisors]].<br />
<br />
=== Formats supported by VirtualBox ===<br />
<br />
VirtualBox supports the following virtual disk formats:<br />
<br />
* '''VDI''': The Virtual Disk Image is the VirtualBox own open container used by default when you create a virtual machine with VirtualBox.<br />
<br />
* '''VMDK''': The Virtual Machine Disk was initially developed by VMware for their products. The specification was initially closed, but it has since become an open format which is fully supported by VirtualBox. This format can be split into several 2GB files, which is especially useful if you want to store the virtual machine on filesystems which do not support very large files. Apart from the HDD format from Parallels, no other format provides an equivalent feature.<br />
<br />
* '''VHD''': The Virtual Hard Disk is the format used by Microsoft in Windows Virtual PC and Hyper-V. If you intend to use any of these Microsoft products, you will have to choose this format.<br />
:{{Tip|Since Windows 7, this format can be mounted directly without any additional application.}} <br />
<br />
* '''VHDX''' (read only): This is the eXtended version of the Virtual Hard Disk format developed by Microsoft, released on 2012-09-04 with Hyper-V 3.0 in Windows Server 2012. This new version of the disk format offers enhanced performance (better block alignment), larger block sizes, and journal support which brings power-failure resiliency. VirtualBox [https://www.virtualbox.org/manual/ch15.html#idp63002176 should support this format in read only].<br />
<br />
* '''HDD''' (version 2): The HDD format is developed by Parallels, Inc. and used in their hypervisor solutions such as Parallels Desktop for Mac. Newer versions of this format (i.e. 3 and 4) are not supported, due to the lack of documentation for this proprietary format. {{Note|There is currently some controversy regarding support for version 2 of the format. While the official VirtualBox manual [https://www.virtualbox.org/manual/ch05.html#vdidetails only reports the second version of the HDD file format as supported], Wikipedia's contributors [[Wikipedia:Comparison of platform virtual machines#Image type compatibility|report that the first version may work too]]. Help is welcome if you can perform some tests with the first version of the HDD format.}}<br />
<br />
* '''QED''': The QEMU Enhanced Disk format is an old file format for QEMU, another free and open source hypervisor. This format was designed in 2010 to provide a superior alternative to QCOW2 and others. It features a fully asynchronous I/O path, strong data integrity, backing files, and sparse files. The QED format is supported only for compatibility with virtual machines created with old versions of QEMU.<br />
<br />
* '''QCOW''': The QEMU Copy On Write format is the current format for QEMU. It supports zlib-based transparent compression and encryption (the latter is flawed and not recommended). QCOW is available in two versions: QCOW and QCOW2, with QCOW2 having largely superseded the original. QCOW is [https://www.virtualbox.org/manual/ch15.html#idp63002176 currently fully supported by VirtualBox]. QCOW2 comes in two revisions: QCOW2 0.10 and QCOW2 1.1 (the default when you create a virtual disk with QEMU). VirtualBox does not support QCOW2.<br />
<br />
* '''OVF''': The Open Virtualization Format is an open format which has been designed for interoperability and distributions of virtual machines between different hypervisors. VirtualBox supports all revisions of this format via the [https://www.virtualbox.org/manual/ch08.html#idp55423424 VBoxManage import/export feature] but with [https://www.virtualbox.org/manual/ch14.html#KnownProblems known limitations].<br />
<br />
* '''RAW''': In this mode the virtual disk is not wrapped in a specific file format container; the raw disk or file is exposed directly. VirtualBox supports this feature in several ways: by converting a RAW disk [https://www.virtualbox.org/manual/ch08.html#idp59139136 to a specific format], by [https://www.virtualbox.org/manual/ch08.html#vboxmanage-clonevdi cloning a disk to RAW], or by directly using a VMDK file [https://www.virtualbox.org/manual/ch09.html#idp57804112 which points to a physical disk or a simple file].<br />
<br />
=== Disk image format conversion ===<br />
<br />
[https://www.virtualbox.org/manual/ch08.html#vboxmanage-clonevdi VBoxManage clonehd] can be used to convert between VDI, VMDK, VHD and RAW.<br />
<br />
$ VBoxManage clonehd ''inputfile'' ''outputfile'' --format ''outputformat''<br />
<br />
For example to convert VDI to VMDK:<br />
<br />
$ VBoxManage clonehd ''source.vdi'' ''destination.vmdk'' --format VMDK<br />
<br />
==== QCOW ====<br />
<br />
VirtualBox does not support [[QEMU]]'s QCOW2 disk image format. To use a QCOW2 disk image with VirtualBox you therefore need to convert it, which you can do with {{Pkg|qemu}}'s {{ic|qemu-img}} command. {{ic|qemu-img}} can convert QCOW to / from VDI, VMDK, VHDX, RAW and various other formats (which you can see by running {{ic|qemu-img --help}}).<br />
<br />
$ qemu-img convert -O ''output_fmt'' ''inputfile'' ''outputfile''<br />
<br />
For example to convert QCOW2 to VDI:<br />
<br />
$ qemu-img convert -O vdi ''source.qcow2'' ''destination.vdi''<br />
<br />
{{Tip|Use the {{ic|-p}} parameter to show the progress of the conversion.}}<br />
<br />
There are two revisions of QCOW2: 0.10 and 1.1. You can specify the revision to use with {{ic|1=-o compat=''revision''}}.<br />
<br />
=== Mount virtual disks ===<br />
<br />
==== VDI ====<br />
<br />
Mounting VDI images only works with fixed-size images (a.k.a. static images); dynamically allocated images are not easily mountable.<br />
<br />
The offset of the partition within the VDI is needed: add 32256 to the value of {{ic|offData}} (e.g. 69632 + 32256 = 101888), which can be obtained with:<br />
<br />
$ VBoxManage internalcommands dumphdinfo <storage.vdi> | grep "offData"<br />
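With the example {{ic|offData}} value of 69632, the offset to pass to {{ic|mount}} works out as:<br />

```shell
# Mount offset = offData + 32256 (start of the partition within the VDI file)
offData=69632
offset=$((offData + 32256))
echo "$offset"   # 101888
```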
<br />
The image can now be mounted with:<br />
<br />
# mount -t ext4 -o rw,noatime,noexec,loop,offset=101888 <storage.vdi> /mntpoint/<br />
<br />
You can also use the [https://github.com/pld-linux/VirtualBox/blob/master/mount.vdi mount.vdi] script; after installing the script itself to {{ic|/usr/bin/}}, it can be used as:<br />
<br />
# mount -t vdi -o fstype=ext4,rw,noatime,noexec ''vdi_file_location'' ''/mnt/''<br />
<br />
Alternatively, you can use {{Pkg|qemu}}'s network block device kernel module [http://bethesignal.org/blog/2011/01/05/how-to-mount-virtualbox-vdi-image/ attrib]:<br />
<br />
# modprobe nbd max_part=16<br />
# qemu-nbd -c /dev/nbd0 <storage.vdi><br />
# mount /dev/nbd0p1 /mnt/dir/<br />
# # to unmount:<br />
# umount /mnt/dir/<br />
# qemu-nbd -d /dev/nbd0<br />
<br />
If the partition nodes are not propagated, try using {{ic|partprobe /dev/nbd0}}; otherwise, a VDI partition can be mapped directly to a node with {{ic|qemu-nbd -P 1 -c /dev/nbd0 <storage.vdi>}}.<br />
<br />
=== Compact virtual disks ===<br />
<br />
Compacting virtual disks only works with {{ic|.vdi}} files and basically consists of the following steps.<br />
<br />
Boot your virtual machine and remove all bloat manually or by using cleaning tools like {{Pkg|bleachbit}} which is [http://bleachbit.sourceforge.net/download/windows available for Windows systems too].<br />
<br />
Wiping free space with zeroes can be achieved with several tools:<br />
* If you were previously using Bleachbit, check the checkbox ''System > Free disk space'' in the GUI, or use {{ic|bleachbit -c system.free_disk_space}} in CLI;<br />
* On UNIX-based systems, by using {{ic|dd}} or preferably {{Pkg|dcfldd}} (see [http://superuser.com/a/355322 here] to learn the differences):<br />
:{{bc|1=# dcfldd if=/dev/zero of=''/fillfile'' bs=4M}}<br />
:When {{ic|fillfile}} reaches the limit of the partition, you will get a message like {{ic|1280 blocks (5120Mb) written.dcfldd:: No space left on device}}. This means that all of the user-space and non-reserved blocks of the partition will be filled with zeros. Using this command as root is important to make sure all free blocks have been overwritten. Indeed, by default, when using partitions with ext filesystem, a specified percentage of filesystem blocks is reserved for the super-user (see the {{ic|-m}} argument in the {{ic|mkfs.ext4}} man pages or use {{ic|tune2fs -l}} to see how much space is reserved for root applications).<br />
:When the aforementioned process has completed, you can remove the file {{ic|''fillfile''}} you created.<br />
<br />
* On Windows, there are two tools available:<br />
:*{{ic|sdelete}} from the [http://technet.microsoft.com/en-us/sysinternals/bb842062.aspx Sysinternals Suite]: type {{ic|sdelete -s -z ''c:''}}, re-executing the command for each drive in your virtual machine;<br />
:* or, if you prefer scripts, there is a [http://blog.whatsupduck.net/2012/03/powershell-alternative-to-sdelete.html PowerShell solution], which still needs to be repeated for all drives.<br />
::{{bc|PS> ./Write-ZeroesToFreeSpace.ps1 -Root ''c:\'' -PercentFree 0}}<br />
::{{Note|This script must be run in a PowerShell environment with administrator privileges. By default, scripts cannot be run, ensure the execution policy is at least on {{ic|RemoteSigned}} and not on {{ic|Restricted}}. This can be checked with {{ic|Get-ExecutionPolicy}} and the required policy can be set with {{ic|Set-ExecutionPolicy RemoteSigned}}.}}<br />
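The UNIX zero-fill step above can be sketched at a small, safe scale with plain {{ic|dd}} (the real invocation omits {{ic|count}} so that it runs until the filesystem is full):<br />

```shell
# Write a 4 MiB file of zeros, check its size, then remove it.
# The real zero-fill omits count= and writes until "No space left on device".
fillfile=$(mktemp)
dd if=/dev/zero of="$fillfile" bs=1M count=4 2>/dev/null
size=$(stat -c %s "$fillfile")
echo "$size"   # 4194304
rm "$fillfile"
```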
<br />
Once the free disk space has been wiped, shut down your virtual machine.<br />
<br />
The next time you boot your virtual machine, it is recommended to do a filesystem check.<br />
* On UNIX-based systems, you can use {{ic|fsck}} manually;<br />
:* On GNU/Linux systems, and thus on Arch Linux, you can force a disk check at boot [[Fsck#Forcing the check|thanks to a kernel boot parameter]];<br />
* On Windows systems, you can use:<br />
:* either {{ic|chkdsk ''c:'' /F}} where {{ic|''c:''}} needs to be replaced by each disk you need to scan and fix errors;<br />
:* or {{ic|ChkDskAll}} [http://therightstuff.de/2009/02/14/ChkDskAll-ChkDsk-For-All-Drives.aspx from here], which is basically the same software as {{ic|chkdsk}}, but without the need to repeat the command for all drives;<br />
<br />
Now, remove the zeros from the {{ic|vdi}} file with [https://www.virtualbox.org/manual/ch08.html#vboxmanage-modifyvdi VBoxManage modifyhd]:<br />
$ VBoxManage modifyhd ''your_disk.vdi'' --compact<br />
<br />
{{Note|If your virtual machine has snapshots, you need to apply the above command to each {{ic|.vdi}} file you have.}}<br />
<br />
=== Increase virtual disks ===<br />
<br />
==== General procedure ====<br />
<br />
If you are running out of space due to the small hard drive size you selected when you created your virtual machine, the solution advised by the VirtualBox manual is to use [https://www.virtualbox.org/manual/ch08.html#vboxmanage-modifyvdi VBoxManage modifyhd]. However, this command only works for VDI and VHD disks, and only for the dynamically allocated variants. If you want to resize a fixed-size virtual disk as well, read on for a trick which works for both Windows and UNIX-like virtual machines.<br />
<br />
First, create a new virtual disk next to the one you want to increase:<br />
$ VBoxManage createhd --filename ''new.vdi'' --size ''10000''<br />
<br />
where the size is in MiB (in this example, 10000 MiB ≈ 9.8 GiB) and ''new.vdi'' is the name of the new virtual disk to be created.<br />
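Since {{ic|--size}} takes MiB, a disk of exactly 10 GiB corresponds to:<br />

```shell
# Desired size in GiB -> MiB value for VBoxManage createhd --size
gib=10
mib=$((gib * 1024))
echo "$mib"   # 10240
```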
<br />
Next, the old virtual disk needs to be cloned to the new one, which may take some time:<br />
$ VBoxManage clonehd ''old.vdi'' ''new.vdi'' --existing<br />
<br />
{{Note|By default, this command uses the ''Standard'' (corresponding to dynamic allocated) file format variant and thus will not use the same file format variant as your source virtual disk. If your ''old.vdi'' has a fixed size and you want to keep this variant, add the parameter {{ic|--variant Fixed}}.}}<br />
<br />
Detach the old hard drive and attach the new one, replacing all the italicized arguments with your own:<br />
$ VBoxManage storageattach ''VM_name'' --storagectl ''SATA'' --port ''0'' --medium none<br />
$ VBoxManage storageattach ''VM_name'' --storagectl ''SATA'' --port ''0'' --medium ''new.vdi'' --type hdd<br />
<br />
To get the storage controller name and the port number, use {{ic|VBoxManage showvminfo ''VM_name''}}. The output will contain something like the following (what you are looking for is in italics):<br />
<br />
{{bc|<br />
[...]<br />
Storage Controller Name (0): IDE<br />
Storage Controller Type (0): PIIX4<br />
Storage Controller Instance Number (0): 0<br />
Storage Controller Max Port Count (0): 2<br />
Storage Controller Port Count (0): 2<br />
Storage Controller Bootable (0): on<br />
Storage Controller Name (1): SATA<br />
Storage Controller Type (1): IntelAhci<br />
Storage Controller Instance Number (1): 0<br />
Storage Controller Max Port Count (1): 30<br />
Storage Controller Port Count (1): 1<br />
Storage Controller Bootable (1): on<br />
IDE (1, 0): Empty<br />
''SATA'' (''0'', 0): /home/wget/IT/Virtual_machines/GNU_Linux_distributions/ArchLinux_x64_EFI/Snapshots/{6bb17af7-e8a2-4bbf-baac-fbba05ebd704}.vdi (UUID: 6bb17af7-e8a2-4bbf-baac-fbba05ebd704)<br />
[...]}}<br />
<br />
Download the [http://gparted.org/download.php GParted live image] and mount it as a virtual CD/DVD disc, boot your virtual machine, increase/move your partitions, unmount the GParted live image and reboot.<br />
<br />
{{Note|On GPT disks, increasing the size of the disk will result in the backup GPT header not being at the end of the device. GParted will ask to fix this; click on ''Fix'' both times. MBR disks do not have this problem, as this partition table has no trailer at the end of the disk.}}<br />
<br />
Finally, unregister the virtual disk from VirtualBox and remove the file:<br />
$ VBoxManage closemedium disk ''old.vdi''<br />
$ rm ''old.vdi''<br />
<br />
==== Increasing the size of VDI disks ====<br />
If your disk is a VDI one, run:<br />
<br />
$ VBoxManage modifyhd ''your_virtual_disk.vdi'' --resize ''the_new_size''<br />
<br />
Then jump back to the GParted step to increase the size of the partition on the virtual disk.<br />
<br />
=== Replace a virtual disk manually from the .vbox file ===<br />
<br />
If you find editing a simple ''XML'' file more convenient than using the GUI or {{ic|VBoxManage}}, and you want to replace (or add) a virtual disk in your virtual machine, simply edit the ''.vbox'' configuration file corresponding to your virtual machine, replacing the GUID, the file location and the format as needed:<br />
<br />
{{hc|ArchLinux_vm.vbox|2=<br />
<HardDisk uuid="''{670157e5-8bd4-4f7b-8b96-9ee412a712b5}''" location="''ArchLinux_vm.vdi''" format="''VDI''" type="Normal"/><br />
}}<br />
<br />
then, in the {{ic|<AttachedDevice>}} sub-tag of {{ic|<StorageController>}}, replace the GUID with the new one.<br />
<br />
{{hc|ArchLinux_vm.vbox|2=<br />
<AttachedDevice type="HardDisk" port="0" device="0"><br />
<Image uuid="''{670157e5-8bd4-4f7b-8b96-9ee412a712b5}''"/><br />
</AttachedDevice><br />
}}<br />
<br />
{{Note|If you do not know the GUID of the drive you want to add, use {{ic|VBoxManage showhdinfo ''file''}}. If you previously used {{ic|VBoxManage clonehd}} to copy/convert your virtual disk, that command output the GUID just after the copy/conversion completed. Using a random GUID does not work, as each [http://www.virtualbox.org/manual/ch05.html#cloningvdis UUID is stored inside each disk image].}}<br />
<br />
==== Transfer between Linux host and other OS ====<br />
<br />
The information about the paths to the hard disks and snapshots is stored between the {{ic|<HardDisks> .... </HardDisks>}} tags in the file with the ''.vbox'' extension. You can edit them manually, or use the following script, changing only the path or keeping the defaults, which assume that the ''.vbox'' file is in the same directory as the virtual hard disk and the snapshots folder. It prints the new configuration to stdout.<br />
<br />
{{bc|1=<br />
#!/bin/bash<br />
# Usage: ./fix-vbox-paths.sh machine.vbox > machine-new.vbox<br />
NewPath="${PWD}/"<br />
Snapshots="Snapshots/"<br />
Filename="$1"<br />
<br />
awk -v SetPath="$NewPath" -v SnapPath="$Snapshots" '<br />
/<HardDisk uuid=/ {<br />
    # $3 is the location="..." attribute<br />
    split($3, B, "=")<br />
    L = B[2]<br />
    gsub(/"/, "", L)<br />
    sub(/^.*\//, "", L)    # strip Unix path components<br />
    sub(/^.*\\/, "", L)    # strip Windows path components<br />
    # snapshot disks have a file name containing braces<br />
    SnapS = (index($3, "{") != 0) ? SnapPath : ""<br />
    print $1 " " $2 " location=\"" SetPath SnapS L "\" " $4 " " $5<br />
    next<br />
}<br />
{ print }<br />
' "$Filename"}}<br />
<br />
{{Note|<br />
* If you are preparing the virtual machine for use on a Windows host, use backslashes (\) instead of forward slashes (/) in the path names.<br />
* The script detects snapshots by looking for {{ic|{}} in the file name.<br />
* To run the machine on a new host, you first need to register it, either by clicking '''Machine -> Add...''' (or using the hotkey Ctrl+A) and then browsing to the ''.vbox'' file that contains the configuration, or by using the command line {{ic|VBoxManage registervm ''filename''.vbox}}}}<br />
<br />
=== Clone a virtual disk and assign a new UUID to it ===<br />
<br />
UUIDs are widely used by VirtualBox. Each virtual machine and each of its virtual disks must have a unique UUID. When you launch a virtual machine, VirtualBox keeps track of all the UUIDs of your virtual machine instance. See [http://www.virtualbox.org/manual/ch08.html#vboxmanage-list VBoxManage list] to list the items registered with VirtualBox.<br />
<br />
If you cloned a virtual disk manually by copying the virtual disk file, you will need to assign a new UUID to the cloned virtual disk if you want to use it in the same virtual machine, or in another one that has already been opened (and thus registered) with VirtualBox.<br />
<br />
You can use this command to assign a new UUID to a virtual disk: <br />
$ VBoxManage internalcommands sethduuid ''/path/to/disk.vdi''<br />
<br />
{{Tip|To avoid copying the virtual disk and assigning a new UUID to your file manually you can use [http://www.virtualbox.org/manual/ch08.html#vboxmanage-clonevdi VBoxManage clonehd].}}<br />
<br />
{{Note|The commands above support all [[#Formats supported by VirtualBox|virtual disk formats supported by VirtualBox]].}}<br />
<br />
== Tips and tricks ==<br />
<br />
For advanced configuration, see [[VirtualBox/Tips and tricks]].<br />
<br />
== Troubleshooting ==<br />
<br />
=== Keyboard and mouse are locked into virtual machine ===<br />
<br />
This means your virtual machine has captured the input of your keyboard and your mouse. Simply press the right {{ic|Ctrl}} key and your input should control your host again.<br />
<br />
To control your virtual machine transparently with your mouse, moving back and forth between guest and host without having to press any key, install the guest additions inside the guest for seamless integration. Read from the [[#Install the Guest Additions]] step if your guest is Arch Linux, otherwise read the official VirtualBox help.<br />
<br />
=== No 64-bit OS client options ===<br />
<br />
When launching a VM client, and no 64-bit options are available, make sure your CPU virtualization capabilities (usually named {{ic|VT-x}}) are enabled in the BIOS.<br />
<br />
If you are using a Windows host, you may need to disable Hyper-V, as it prevents VirtualBox from using VT-x. [https://www.virtualbox.org/ticket/12350]<br />
<br />
=== VirtualBox GUI does not match host GTK theme ===<br />
<br />
See [[Uniform look for Qt and GTK applications]] for information about theming Qt-based applications like VirtualBox.<br />
<br />
=== Cannot send Ctrl+Alt+Fn to guest ===<br />
<br />
Your guest operating system is a GNU/Linux distribution and you want to open a new TTY shell by hitting {{ic|Ctrl+Alt+F2}} or exit your current X session with {{ic|Ctrl+Alt+Backspace}}. If you type these keyboard shortcuts without any adaptation, the guest will not receive any input and the host (if it is a GNU/Linux distribution too) will intercept these shortcut keys. To send {{ic|Ctrl+Alt+F2}} to the guest for example, simply hit your ''Host Key'' (usually the right {{ic|Ctrl}} key) and press {{ic|F2}} simultaneously.<br />
<br />
=== USB subsystem not working ===<br />
<br />
Your user must be in the {{ic|vboxusers}} group and you need to install the [[#Extension pack|extension pack]] if you want USB 2 support. Then you will be able to enable USB 2 in the VM settings and add one or several filters for the devices you want to access from the guest OS.<br />
<br />
If {{ic|VBoxManage list usbhost}} does not show any USB devices, even when run as root, make sure that there are no old udev rules (from VirtualBox 4.x) in ''/etc/udev/rules.d/''. VirtualBox 5.0 installs udev rules to ''/usr/lib/udev/rules.d/''. You can use a command like {{ic|pacman -Qo /usr/lib/udev/rules.d/60-vboxdrv.rules}} to determine whether a udev rule file is outdated.<br />
<br />
Sometimes, on old Linux hosts, the USB subsystem is not auto-detected, resulting in the error {{ic|Could not load the Host USB Proxy service: VERR_NOT_FOUND}} or in USB drives not being visible on the host, [https://bbs.archlinux.org/viewtopic.php?id=121377 even when the user is in the '''vboxusers''' group]. This is due to the fact that VirtualBox switched from ''usbfs'' to ''sysfs'' in version 3.0.8. If the host does not understand this change, you can revert to the old behaviour by defining the following environment variable in any file that is sourced by your shell (e.g. your {{ic|~/.bashrc}} if you are using ''bash''):<br />
<br />
{{hc|~/.bashrc|VBOX_USB<nowiki>=</nowiki>usbfs}}<br />
<br />
Then make sure the environment has been made aware of this change (reconnect, source the file manually, launch a new shell instance or reboot).<br />
<br />
Also make sure that your user is a member of the {{ic|storage}} group.<br />
<br />
=== USB modem not working on host ===<br />
<br />
If you have a USB modem which is being used by the guest OS, killing the guest OS can cause the modem to become unusable by the host system. Killing and restarting {{ic|VBoxSVC}} should fix this problem.<br />
<br />
=== Access serial port from guest ===<br />
<br />
Check your permissions on the serial port:<br />
$ /bin/ls -l /dev/ttyS*<br />
crw-rw---- 1 root uucp 4, 64 Feb 3 09:12 /dev/ttyS0<br />
crw-rw---- 1 root uucp 4, 65 Feb 3 09:12 /dev/ttyS1<br />
crw-rw---- 1 root uucp 4, 66 Feb 3 09:12 /dev/ttyS2<br />
crw-rw---- 1 root uucp 4, 67 Feb 3 09:12 /dev/ttyS3<br />
<br />
Add your user to the {{ic|uucp}} [[group]].<br />
<br />
=== Guest freezes after starting Xorg ===<br />
<br />
Faulty or missing drivers may cause the guest to freeze after starting Xorg, see for example [https://bbs.archlinux.org/viewtopic.php?pid=1167838] and [https://bbs.archlinux.org/viewtopic.php?id=156079]. Try disabling 3D acceleration in ''Settings > Display'', and check if all [[Xorg]] drivers are installed.<br />
<br />
=== Fullscreen mode shows blank screen ===<br />
On some window managers ([[i3]], [[awesome]]), VirtualBox has issues displaying fullscreen mode properly due to the overlay bar. To work around this issue, disable the "Show in Full-screen/Seamless" option in ''Guest Settings > User Interface > Mini ToolBar''. See the [https://www.virtualbox.org/ticket/14323 upstream bug report] for more information.<br />
<br />
=== Host freezes on virtual machine start ===<br />
<br />
{{Expansion|Needs a link to a bug report.}}<br />
<br />
Possible causes/solutions:<br />
* SMAP<br />
This is a known incompatibility with SMAP-enabled kernels, affecting (mostly) Intel Broadwell chipsets. A solution is to disable SMAP support in your kernel by appending the {{ic|nosmap}} option to your [[kernel parameters]].<br />
* Hardware Virtualisation<br />
Disabling hardware virtualisation (VT-x/AMD-V) may solve the problem.<br />
* Various Kernel bugs<br />
** Fuse mounted partitions (like ntfs) [https://bbs.archlinux.org/viewtopic.php?id=185841], [https://bugzilla.kernel.org/show_bug.cgi?id=82951#c12]<br />
<br />
Generally, such issues are observed after upgrading VirtualBox or the Linux kernel. Downgrading them to their previous versions might solve the problem.<br />
<br />
=== Linux guests have slow/distorted audio ===<br />
<br />
The AC97 audio driver within the Linux kernel occasionally guesses the wrong clock settings when running inside VirtualBox, leading to audio that is either too slow or too fast. To fix this, create a file in {{ic|/etc/modprobe.d}} with the following line:<br />
<br />
options snd-intel8x0 ac97_clock=48000<br />
<br />
=== Analog microphone not working ===<br />
<br />
If the audio input from an analog microphone is working correctly on the host, but no sound seems to get through to the guest, despite the microphone device apparently being detected normally, installing a [[Sound system#Sound servers|sound server]] such as [[PulseAudio]] on the host might fix the problem.<br />
<br />
If after installing [[PulseAudio]] the microphone still refuses to work, setting ''Host Audio Driver'' (under ''VirtualBox > Machine > Settings > Audio'') to ''ALSA Audio Driver'' might help.<br />
<br />
=== Microphone not working after upgrade ===<br />
<br />
There have been issues reported around sound input in 5.1.x versions. [https://forums.virtualbox.org/viewtopic.php?f=7&t=78797]<br />
<br />
[[Downgrading]] may solve the problem. You can use {{aur|virtualbox-bin-5.0}} to ease downgrading.<br />
<br />
=== Problems with images converted to ISO ===<br />
<br />
Some image formats cannot be reliably converted to ISO. For instance, {{Pkg|ccd2iso}} ignores .ccd and .sub files, which can result in disk images with broken files. <br />
<br />
In this case, you will have to either use [[CDemu]] inside the VirtualBox guest or mount the disk image with some other suitable utility.<br />
<br />
=== Failed to create the host-only network interface ===<br />
<br />
Make sure all required kernel modules are loaded. See [[#Load the VirtualBox kernel modules]].<br />
<br />
=== Failed to insert module ===<br />
<br />
When you get the following error when trying to load modules:<br />
<br />
Failed to insert 'vboxdrv': Required key not available<br />
<br />
[[#Sign_modules|Sign]] your modules or disable {{ic|CONFIG_MODULE_SIG_FORCE}} in your kernel config.<br />
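As a quick check before signing anything, the sketch below inspects whether the running kernel was built with enforced module signatures; it assumes {{ic|/proc/config.gz}} is available (CONFIG_IKCONFIG_PROC), hence the fallback message:<br />

```shell
# Check whether the running kernel enforces module signatures
# (CONFIG_MODULE_SIG_FORCE). /proc/config.gz only exists when
# CONFIG_IKCONFIG_PROC is enabled, so fall back gracefully.
CFG=/proc/config.gz
if [ -r "$CFG" ] && zcat "$CFG" | grep -q '^CONFIG_MODULE_SIG_FORCE=y'; then
    SIG="enforced"
else
    SIG="not enforced or unknown"
fi
echo "module signature verification: $SIG"
```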
<br />
=== VBOX_E_INVALID_OBJECT_STATE (0x80BB0007) ===<br />
<br />
This can occur if a VM is exited ungracefully. Run the following command:<br />
$ VBoxManage controlvm ''virtual_machine_name'' poweroff<br />
<br />
=== NS_ERROR_FAILURE and missing menu items ===<br />
<br />
This happens sometimes when selecting ''QCOW''/''QCOW2''/''QED'' disk format when creating a new virtual disk.<br />
<br />
If you encounter this message the first time you start the virtual machine:<br />
<br />
{{bc|Failed to open a session for the virtual machine debian.<br />
Could not open the medium '/home/.../VirtualBox VMs/debian/debian.qcow'.<br />
QCow: Reading the L1 table for image '/home/.../VirtualBox VMs/debian/debian.qcow' failed (VERR_EOF).<br />
VD: error VERR_EOF opening image file '/home/.../VirtualBox VMs/debian/debian.qcow' (VERR_EOF).<br />
<br />
Result Code: <br />
NS_ERROR_FAILURE (0x80004005)<br />
Component: <br />
Medium<br />
}}<br />
<br />
Exit VirtualBox, delete all files of the new machine, and remove the last entry in the {{ic|MachineRegistry}} section (or whichever entry belongs to the offending machine) from the VirtualBox configuration file:<br />
<br />
{{hc|~/.config/VirtualBox/VirtualBox.xml|2=<br />
...<br />
<MachineRegistry><br />
<MachineEntry uuid="{00000000-0000-0000-0000-000000000000}" src="/home/void/VirtualBox VMs/debian/debian.vbox"/><br />
<MachineEntry uuid="{00000000-0000-0000-0000-000000000000}" src="/home/void/VirtualBox VMs/ubuntu/ubuntu.vbox"/><br />
<strike><MachineEntry uuid="{00000000-0000-0000-0000-000000000000}" src="/home/void/VirtualBox VMs/lastvmcausingproblems/lastvmcausingproblems.qcow"/></strike><br />
</MachineRegistry><br />
...<br />
}}<br />
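The removal can also be sketched with {{ic|sed}}. The example below operates on a throwaway copy with made-up entries; on a real system the file is {{ic|~/.config/VirtualBox/VirtualBox.xml}} and should be backed up first:<br />

```shell
# Drop the offending <MachineEntry> line from the machine registry.
# REG defaults to a throwaway copy with made-up entries for demonstration;
# the real file is ~/.config/VirtualBox/VirtualBox.xml (back it up first).
REG="${REG:-$(mktemp)}"
cat > "$REG" <<'EOF'
<MachineRegistry>
  <MachineEntry uuid="{1111}" src="/home/void/VirtualBox VMs/debian/debian.vbox"/>
  <MachineEntry uuid="{2222}" src="/home/void/VirtualBox VMs/broken/broken.qcow"/>
</MachineRegistry>
EOF
sed -i '/broken\.qcow/d' "$REG"   # remove the entry for the broken machine
grep -c '<MachineEntry' "$REG"    # one entry remains
```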
<br />
=== Arch: pacstrap script fails ===<br />
<br />
If you used ''pacstrap'' in the [[#Installation steps for Arch Linux guests]] to also [[#Install the Guest Additions]] '''before''' performing a first boot into the new guest, you will need to {{ic|umount -l /mnt/dev}} as root before using ''pacstrap'' again; a failure to do this will render it unusable.<br />
<br />
=== OpenBSD unusable when virtualisation instructions unavailable ===<br />
<br />
While OpenBSD is reported to work fine on other hypervisors without virtualisation instructions (VT-x/AMD-V) enabled, an OpenBSD virtual machine running on VirtualBox without these instructions will be unusable, manifesting in numerous segmentation faults. Starting VirtualBox with the ''-norawr0'' argument [https://www.virtualbox.org/ticket/3947 may solve the problem]. You can do it like this:<br />
$ VBoxSDL -norawr0 -vm ''name_of_OpenBSD_VM''<br />
<br />
=== Windows host: VERR_ACCESS_DENIED ===<br />
<br />
To access the raw VMDK image on a Windows host, run the VirtualBox GUI as administrator.<br />
<br />
=== Windows: "The specified path does not exist. Check the path and then try again." ===<br />
<br />
This error message often appears when running an .exe file which requires administrator privileges from a shared folder in Windows guests. See [https://www.virtualbox.org/ticket/5732 the bug report] for details. Such .exe files can be run after copying them to the virtual drive.<br />
<br />
Other threads on the internet suggest adding VBOXSVR to the list of trusted sites, but this does not work with Windows 7 or newer.<br />
<br />
=== Windows 8.x error code 0x000000C4===<br />
<br />
If you get this error code while booting, even if you chose the OS type Windows 8, try enabling the {{ic|CMPXCHG16B}} CPU instruction:<br />
<br />
$ vboxmanage setextradata ''virtual_machine_name'' VBoxInternal/CPUM/CMPXCHG16B 1<br />
<br />
=== Windows 8, 8.1 or 10 fails to install, boot or has error "ERR_DISK_FULL" ===<br />
Update the VM's settings by going to ''Settings > Storage > Controller:SATA'' and checking "Use Host I/O Cache".<br />
<br />
=== WinXP: Bit-depth cannot be greater than 16 ===<br />
<br />
If you are running at 16-bit color depth, then the icons may appear fuzzy/choppy. However, upon attempting to change the color depth to a higher level, the system may restrict you to a lower resolution or simply not enable you to change the depth at all. To fix this, run {{ic|regedit}} in Windows and add the following key to the Windows XP VM's registry:<br />
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services]<br />
"ColorDepth"=dword:00000004<br />
<br />
Then update the color depth in the "desktop properties" window. If nothing happens, force the screen to redraw through some method (e.g. {{ic|Host+f}} to redraw/enter full screen).<br />
<br />
== See also ==<br />
<br />
* [https://www.virtualbox.org/manual/UserManual.html VirtualBox User Manual]<br />
* [[Wikipedia:VirtualBox]]</div>Mousemanhttps://wiki.archlinux.org/index.php?title=PhpVirtualBox&diff=480961PhpVirtualBox2017-07-01T15:55:53Z<p>Mouseman: Added 'related' box</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[ja:PhpVirtualBox]]<br />
[[ru:PhpVirtualBox]]<br />
{{Related articles start}}<br />
{{Related|VirtualBox}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|RemoteBox}}<br />
{{Related articles end}}<br />
'''phpVirtualBox''' is an open source, AJAX implementation of the [[VirtualBox]] user interface written in [[PHP]]. As a modern web interface, it allows you to access and control remote VirtualBox instances. Much of its verbiage and some of its code is based on the (inactive) vboxweb project. phpVirtualBox was designed to allow users to administer VirtualBox in a headless environment - mirroring the VirtualBox GUI through its web interface.<br />
<br />
== Installation ==<br />
To remotely control a virtual machine you need two components: the VirtualBox web service, running on the same OS as the virtual machine, and the web interface, written in PHP and therefore dependent on a PHP-capable web server. Communication between them is based on the [[Wikipedia:SOAP|SOAP]] protocol and is currently unencrypted, so it is recommended to install both on the same machine unless you want your username and password sent over the network in clear text.<br />
<br />
=== VirtualBox web service ===<br />
To use the web console, you must install the {{AUR|virtualbox-ext-oracle}} package.<br />
<br />
=== VirtualBox web interface (phpvirtualbox) ===<br />
[[Install]] the {{Pkg|phpvirtualbox}} package. You will also need a PHP-capable web server of your choice ([[Apache]] is a suitable choice).<br />
<br />
== Configuration ==<br />
''From here on out, it is assumed that you have a web server (with root at {{Ic|/srv/http}}) and PHP functioning properly.''<br />
<br />
=== Web service ===<br />
In the virtual machine settings, enable remote desktop access and specify a port that differs from those of other virtual machines.<br />
<br />
Every time you need to make a machine remotely available, execute something like the following as the user whose account the service should run from:<br />
<br />
vboxwebsrv -b --logfile '''path to log file''' --pidfile /run/vbox/vboxwebsrv.pid --host 127.0.0.1<br />
<br />
The {{Ic|--host}} option is not necessary if you enabled association with '''localhost''' in {{Ic|/etc/host.conf}}.<br />
<br />
{{Note|This user must be in group '''vboxusers'''!}}<br />
<br />
The {{Pkg|virtualbox}} package, available in the ''community'' repository, contains the {{Ic|vboxweb.service}} unit for [[systemd]].<br />
<br />
To start {{Ic|vboxweb}} as a '''non-root user''', you must:<br />
<br />
1. Create a user (for example, {{Ic|vbox}}) or add an existing one to the group {{Ic|vboxusers}}<br />
<br />
2. Create your custom {{Ic|vboxweb_mod.service}} file by copying {{Ic|/usr/lib/systemd/system/vboxweb.service}} to {{Ic|/etc/systemd/system/vboxweb_mod.service}}<br />
<br />
3. Modify {{Ic|/etc/systemd/system/vboxweb_mod.service}} like this:<br />
<nowiki> [Unit]<br />
Description=VirtualBox Web Service<br />
After=network.target<br />
<br />
[Service]<br />
Type=forking<br />
PIDFile=/run/vboxweb/vboxweb.pid<br />
ExecStart=/usr/bin/vboxwebsrv --pidfile /run/vboxweb/vboxweb.pid --background<br />
User=vbox<br />
Group=vboxusers<br />
<br />
[Install]<br />
WantedBy=multi-user.target</nowiki><br />
<br />
4. Create tmpfile rule for your {{Ic|vboxweb_mod.service}}<br />
# echo "d /run/vboxweb 0755 vbox vboxusers" > /etc/tmpfiles.d/vboxweb_mod.conf<br />
<br />
5. Manually create the {{Ic|/run/vboxweb}} directory for the first start of {{Ic|vboxweb_mod.service}}:<br />
# mkdir /run/vboxweb<br />
# chown vbox:vboxusers /run/vboxweb<br />
# chmod 755 /run/vboxweb<br />
or simply reboot your system so that it is created automatically.<br />
<br />
6. [[Start]]/[[enable]] {{ic|vboxweb_mod.service}}<br />
<br />
=== Web interface ===<br />
Edit {{Ic|/etc/php/php.ini}} and make sure the following line is uncommented:<br />
extension=soap.so<br />
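A quick way to confirm the extension actually loads is to query the PHP CLI; the sketch below only assumes {{ic|php}} is installed, and the output messages are illustrative:<br />

```shell
# Report whether PHP's SOAP extension is loaded, degrading gracefully
# when the php CLI is not installed at all.
if ! command -v php >/dev/null 2>&1; then
    MSG="php not installed"
elif php -m | grep -qi '^soap$'; then
    MSG="soap enabled"
else
    MSG="soap missing"
fi
echo "$MSG"
```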
<br />
Copy the example configuration file {{Ic|/usr/share/webapps/phpvirtualbox/config.php-example}} to {{Ic|/etc/webapps/phpvirtualbox/config.php}}, edit it appropriately (it is well commented and needs no further explanation), and symlink it to {{Ic|/usr/share/webapps/phpvirtualbox/config.php}}.<br />
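The copy-and-symlink step can be sketched as below. The directories default to throwaway locations for demonstration; on a real system {{ic|SRC}} would be {{ic|/usr/share/webapps/phpvirtualbox}} and {{ic|ETC}} would be {{ic|/etc/webapps/phpvirtualbox}}:<br />

```shell
# Demonstrate the config.php copy-and-symlink layout with throwaway
# directories; substitute the real package paths on an actual system.
SRC="${SRC:-$(mktemp -d)}"
ETC="${ETC:-$(mktemp -d)}"
echo '<?php // example configuration' > "$SRC/config.php-example"
cp "$SRC/config.php-example" "$ETC/config.php"   # edit this copy to taste
ln -sf "$ETC/config.php" "$SRC/config.php"       # the web root reads the symlink
readlink "$SRC/config.php"
```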
<br />
Then, edit {{Ic|/etc/php/php.ini}}, find {{Ic|open_basedir}} and append the configuration path {{Ic|/etc/webapps/}} at the end. It will look like the following:<br />
<br />
open_basedir = /srv/http/:/home/:/tmp/:/usr/share/pear/:/usr/share/webapps/:/etc/webapps/<br />
<br />
If you are running Apache as the web server, you can copy {{Ic|/etc/webapps/phpvirtualbox/apache.example.conf}} to {{Ic|/etc/httpd/conf/extra/phpvirtualbox.conf}}. If you are running Apache 2.4, due to [http://httpd.apache.org/docs/2.4/upgrading.html#run-time changes in the ACL syntax], edit that file to replace the following:<br />
<br />
Order allow,deny<br />
Allow from all<br />
<br />
to:<br />
<br />
Require all granted<br />
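The replacement can be automated with {{ic|sed}}; the sketch below works on a throwaway copy of the fragment for demonstration, while the real file is {{ic|/etc/httpd/conf/extra/phpvirtualbox.conf}}:<br />

```shell
# Convert the Apache 2.2 ACL directives to the 2.4 syntax in a copy of
# the fragment. CONF defaults to a throwaway file for demonstration.
CONF="${CONF:-$(mktemp)}"
printf 'Order allow,deny\nAllow from all\n' > "$CONF"
sed -i -e 's/^Order allow,deny$/Require all granted/' \
       -e '/^Allow from all$/d' "$CONF"
cat "$CONF"
```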
<br />
Next, add the following line to {{Ic|/etc/httpd/conf/httpd.conf}}:<br />
<br />
Include conf/extra/phpvirtualbox.conf<br />
<br />
Edit {{Ic|/etc/webapps/phpvirtualbox/.htaccess}} and remove the following line:<br />
<br />
deny from all<br />
<br />
Do not forget to restart the webserver (e.g. for Apache, [[restart]] {{ic|httpd.service}}).<br />
<br />
== Running ==<br />
If everything works fine, visit http://'''YourVboxWebInterfaceHost'''/phpvirtualbox and it should show a login box. The initial username and password are both '''admin'''; after logging in, change them from the web interface (''File > Change password''). If you set {{Ic|1=$noAuth=true}} in the web interface {{Ic|config.php}}, you should immediately see the phpvirtualbox web interface.<br />
<br />
== Debugging ==<br />
If you encounter a login problem after upgrading VirtualBox from 3.2.x to 4.0.x, run the following commands to update the websrvauthlibrary in your VirtualBox configuration file, which has been changed from {{Ic|VRDPAuth.so}} to {{Ic|VBoxAuth.so}}:<br />
<br />
VBoxManage setproperty vrdeauthlibrary default<br />
VBoxManage setproperty websrvauthlibrary default <br />
<br />
If you are still unable to log in to the interface, you can try to disable web authentication with<br />
<br />
VBoxManage setproperty websrvauthlibrary null<br />
<br />
on the virtualization server, then set the username and password to empty strings and set {{Ic|1=$noAuth=true}} in {{Ic|/etc/webapps/phpvirtualbox/config.php}} on the web server. By doing this, you should be able to access the web interface immediately, without a login process. After that, you can try some form of Apache access control.<br />
<br />
== External Resources ==<br />
* [http://sourceforge.net/projects/phpvirtualbox/ PHPVirtualBox Home Page]<br />
* [http://www.torrent-invites.com/software/101718-manage-your-virtualbox-vms-via-web-phpvirtualbox.html Manage your VirtualBox VMs via the web with phpVirtualBox]<br />
* [https://bbs.archlinux.org/viewtopic.php?id=147175 systemd vboxweb.service mod when needing to start as non-root user]</div>Mousemanhttps://wiki.archlinux.org/index.php?title=Remotebox&diff=480960Remotebox2017-07-01T15:54:53Z<p>Mouseman: Mouseman moved page Remotebox to RemoteBox: typo</p>
<hr />
<div>#REDIRECT [[RemoteBox]]</div>Mousemanhttps://wiki.archlinux.org/index.php?title=RemoteBox&diff=480959RemoteBox2017-07-01T15:54:53Z<p>Mouseman: Mouseman moved page Remotebox to RemoteBox: typo</p>
<hr />
<div>[[Category:Virtualization]]<br />
{{Related articles start}}<br />
{{Related|VirtualBox}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|PhpVirtualBox}}<br />
{{Related articles end}}<br />
'''RemoteBox''' is an open source VirtualBox client with remote management, written in [[Perl]]. In essence, you can remotely administer (i.e. over the network) an installation of VirtualBox on a server, including its guests, and interact with them as if they were running locally. VirtualBox is installed on 'the server' machine and RemoteBox runs on 'the client' machine. RemoteBox provides a complete GTK graphical interface with a look and feel very similar to that of VirtualBox's native GUI. If you are familiar with other virtualization software, such as VMware ESX, then think of RemoteBox as the "poor man's" VI client.<br />
<br />
== Installation ==<br />
RemoteBox can be installed on the client with the {{AUR|remotebox}} package. It will pull in all the required GTK2 and Perl packages. However, an RDP client such as FreeRDP or rdesktop is also required and needs to be installed manually. As of this writing, {{AUR|freerdp-git}} 2.0.0.beta1 has been tested and found working.<br />
<br />
=== VirtualBox web service ===<br />
To use RemoteBox, you must have [[VirtualBox]] installed on your server, along with the {{AUR|virtualbox-ext-oracle}} package. For a headless server not running a GUI, installing {{AUR|virtualbox-headless}} is suggested. It is also suggested to install {{Pkg|virtualbox-guest-iso}} on your server.<br />
<br />
On your server running VirtualBox, create a new user with a homedir and login shell, for example:<br />
# useradd -m -g vboxusers -s /bin/bash vbox<br />
<br />
This will create a new user 'vbox' with 'vboxusers' as its primary group, a home directory and a login shell. The home directory is required for storing VirtualBox settings and configurations for virtual machines. The shell is required because otherwise RemoteBox will not be able to log in. Now give it a password and record it somewhere safe:<br />
<br />
# passwd vbox<br />
<br />
Create a custom {{Ic|vboxweb-mod.service}} file by copying {{Ic|/usr/lib/systemd/system/vboxweb.service}} to {{Ic|/usr/lib/systemd/system/vboxweb-mod.service}}<br />
<br />
Modify {{Ic|/usr/lib/systemd/system/vboxweb-mod.service}} as follows:<br />
<nowiki> [Unit]<br />
Description=VirtualBox Web Service<br />
After=network.target<br />
<br />
[Service]<br />
Type=forking<br />
PIDFile=/run/vboxweb/vboxweb.pid<br />
ExecStart=/usr/bin/vboxwebsrv --pidfile /run/vboxweb/vboxweb.pid --host <your server ip> --background<br />
User=vbox<br />
Group=vboxusers<br />
<br />
[Install]<br />
WantedBy=multi-user.target</nowiki><br />
<br />
Note: Do not forget to replace <your server ip> with your server's main IP address.<br />
<br />
Create a tmpfile rule for your {{Ic|vboxweb-mod.service}}<br />
# echo "d /run/vboxweb 0755 vbox vboxusers" > /etc/tmpfiles.d/vboxweb-mod.conf<br />
<br />
Manually create the {{Ic|/run/vboxweb}} directory for the first start of {{Ic|vboxweb-mod.service}}:<br />
# mkdir /run/vboxweb<br />
# chown vbox:vboxusers /run/vboxweb<br />
# chmod 755 /run/vboxweb<br />
<br />
You can enable logging by editing the ExecStart line in the unit file above to include the {{Ic|--logfile <logfile location>}} directive. To enable verbose logging, you can also include the {{Ic|--verbose}} directive. Make sure the vbox user can create and write to the logfile you are configuring.<br />
<br />
[[Start]]/[[enable]] {{ic|vboxweb-mod.service}}<br />
<br />
== Connecting RemoteBox to the vboxweb service ==<br />
Open RemoteBox and click the {{Ic|Connect}} button. Specify the following:<br />
URL: http://<your server ip>:18083<br />
Username: vbox<br />
Password: <password recorded earlier><br />
<br />
To make it easier to connect during future sessions, after logging in go to ''File > Connection Profiles'' and create a new connection profile.<br />
<br />
== Troubleshooting ==<br />
If you encounter a login problem connecting to the server, first check that the service is running. From the server console, use<br />
# systemctl status vboxweb-mod.service<br />
<br />
It should output that it is running. If not, check logging with {{ic|journalctl}} and, if you configured a {{ic|logfile}}, the vboxweb service logfile for any leads.<br />
<br />
Even with verbose logging, the vboxweb service might not give you any lead as to what the problem is. In that case, you can become {{ic|vbox}} and run {{ic|vboxwebsrv}} from the command line.<br />
<br />
# su vbox<br />
<br />
Then manually start vboxwebsrv:<br />
$ /usr/bin/vboxwebsrv --pidfile /run/vboxweb/vboxweb.pid --host <your server ip><br />
<br />
Omit the {{ic|--background}} and {{ic|--logfile}} directives. If the service now starts, the problem could be the permissions on the logfile. Leave it running and check if you can connect with RemoteBox from the client.<br />
<br />
If you still cannot connect, you can stop the service with {{ic|Ctrl+c}} and start it with the {{ic|--background}} directive. Next, check using netstat or something similar whether vboxwebsrv is listening on port 18083. If you see a different port, try connecting RemoteBox on that port instead.<br />
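One way to perform this check is with {{ic|ss}} from iproute2; port 18083 is the vboxwebsrv default, and the messages below are illustrative:<br />

```shell
# Check whether anything is listening on the vboxwebsrv port.
# PORT defaults to 18083, the vboxwebsrv default.
PORT="${PORT:-18083}"
if ss -tln 2>/dev/null | grep -q ":$PORT "; then
    MSG="listener found on port $PORT"
else
    MSG="nothing listening on port $PORT"
fi
echo "$MSG"
```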
<br />
Another reason could be a firewall, either on your server, or on your client.<br />
<br />
If you are getting the following error message:<br />
vboxwebsrv: error: failed to initialize COM! hrc=NS_ERROR_FAILURE<br />
<br />
Check that the home directory exists and is writable for user 'vbox'. Also, check that {{ic|$HOME/.config/VirtualBox}} gets created and populated with config files.<br />
<br />
== External Resources ==<br />
* [http://remotebox.knobgoblin.org.uk/ RemoteBox Home Page]<br />
* [http://remotebox.knobgoblin.org.uk/docs/remotebox.pdf RemoteBox Manual]</div>Mousemanhttps://wiki.archlinux.org/index.php?title=User_talk:Mouseman&diff=480915User talk:Mouseman2017-07-01T13:18:19Z<p>Mouseman: Created page with "~~~~"</p>
<hr />
<div>[[User:Mouseman|Mouseman]] ([[User talk:Mouseman|talk]]) 13:18, 1 July 2017 (UTC)</div>Mousemanhttps://wiki.archlinux.org/index.php?title=User:Mouseman&diff=480914User:Mouseman2017-07-01T13:18:08Z<p>Mouseman: Created page with "test"</p>
<hr />
<div>test</div>Mousemanhttps://wiki.archlinux.org/index.php?title=NVIDIA/Tips_and_tricks&diff=480135NVIDIA/Tips and tricks2017-06-20T16:04:59Z<p>Mouseman: Added a fix for screen tearing</p>
<hr />
<div>[[Category:Graphics]]<br />
[[Category:X server]]<br />
[[ja:NVIDIA/Tips and tricks]]<br />
[[ru:NVIDIA/Tips and tricks]]<br />
== Fixing terminal resolution ==<br />
<br />
Transitioning from nouveau may cause your startup terminal to display at a lower resolution. For GRUB, see [[GRUB/Tips and tricks#Setting the framebuffer resolution]] for details.<br />
<br />
== Fixing screen tearing ==<br />
<br />
The following is an abstract of the article found here:<br />
https://www.cmscritic.com/how-to-fix-nvidia-screen-tearing-in-xfce-mate-kde-lxde-and-others/<br />
<br />
If you are suffering from screen tearing using the official NVIDIA drivers, you can easily test from a terminal whether ForceCompositionPipeline or ForceFullCompositionPipeline provides a fix for you. Simply issue the following command:<br />
<br />
$ nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceCompositionPipeline = On }"<br />
<br />
Your screen may flicker for a second; check whether the tearing is gone. If this option did not work for you, try ForceFullCompositionPipeline as follows:<br />
<br />
$ nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"<br />
<br />
Once you have determined which mode works for you, you can make it permanent by adding the following line to the {{ic|Screen}} section of {{ic|xorg.conf}}:<br />
<br />
Option "metamodes" "nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"<br />
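For context, a minimal hypothetical {{ic|Screen}} section with this option set might look like the following (the {{ic|Identifier}} and {{ic|Device}} names are illustrative and must match your own configuration):<br />

```
Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    Option     "metamodes" "nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"
EndSection
```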
<br />
== Using TV-out ==<br />
<br />
A good article on the subject can be found [http://en.wikibooks.org/wiki/NVidia/TV-OUT here].<br />
<br />
== X with a TV (DFP) as the only display ==<br />
<br />
The X server falls back to CRT-0 if no monitor is automatically detected. This can be a problem when using a DVI connected TV as the main display, and X is started while the TV is turned off or otherwise disconnected.<br />
<br />
To force NVIDIA to use DFP, store a copy of the EDID somewhere in the filesystem so that X can parse the file instead of reading EDID from the TV/DFP.<br />
<br />
To acquire the EDID, start nvidia-settings. It will show some information in tree format, ignore the rest of the settings for now and select the GPU (the corresponding entry should be titled "GPU-0" or similar), click the {{ic|DFP}} section (again, {{ic|DFP-0}} or similar), click on the {{ic|Acquire Edid}} Button and store it somewhere, for example, {{ic|/etc/X11/dfp0.edid}}.<br />
<br />
If no mouse and keyboard are attached, the EDID can be acquired using only the command line. Run an X server with enough verbosity to print out the EDID block:<br />
$ startx -- -logverbose 6<br />
After the X Server has finished initializing, close it and your log file will probably be in {{ic|/var/log/Xorg.0.log}}. Extract the EDID block using nvidia-xconfig:<br />
 $ nvidia-xconfig --extract-edids-from-file=/var/log/Xorg.0.log --extract-edids-output-file=/etc/X11/dfp0.edid<br />
<br />
Edit {{ic|xorg.conf}} by adding to the {{ic|Device}} section:<br />
Option "ConnectedMonitor" "DFP"<br />
Option "CustomEDID" "DFP-0:/etc/X11/dfp0.edid"<br />
The {{ic|ConnectedMonitor}} option forces the driver to recognize the DFP as if it were connected. The {{ic|CustomEDID}} provides EDID data for the device, meaning that it will start up just as if the TV/DFP were connected during X startup.<br />
<br />
This way, one can automatically start a display manager at boot time and still have a working and properly configured X screen by the time the TV gets powered on.<br />
<br />
If the above changes did not work, in the {{ic|xorg.conf}} under {{ic|Device}} section you can try to remove the {{ic|Option "ConnectedMonitor" "DFP"}} and add the following lines:<br />
Option "ModeValidation" "NoDFPNativeResolutionCheck"<br />
Option "ConnectedMonitor" "DFP-0"<br />
<br />
The {{ic|NoDFPNativeResolutionCheck}} option prevents the NVIDIA driver from disabling all the modes that do not fit in the native resolution.<br />
<br />
== Check the power source ==<br />
<br />
The NVIDIA X.org driver can also be used to detect the GPU's current source of power. To see the current power source, check the 'GPUPowerSource' read-only parameter (0 - AC, 1 - battery):<br />
<br />
{{hc|$ nvidia-settings -q GPUPowerSource -t|1}}<br />
<br />
== Listening to ACPI events ==<br />
<br />
NVIDIA drivers automatically try to connect to the [[acpid]] daemon and listen to ACPI events such as battery power, docking, some hotkeys, etc. If connection fails, X.org will output the following warning:<br />
<br />
{{hc|~/.local/share/xorg/Xorg.0.log|<br />
NVIDIA(0): ACPI: failed to connect to the ACPI event daemon; the daemon<br />
NVIDIA(0): may not be running or the "AcpidSocketPath" X<br />
NVIDIA(0): configuration option may not be set correctly. When the<br />
NVIDIA(0): ACPI event daemon is available, the NVIDIA X driver will<br />
NVIDIA(0): try to use it to receive ACPI event notifications. For<br />
NVIDIA(0): details, please see the "ConnectToAcpid" and<br />
NVIDIA(0): "AcpidSocketPath" X configuration options in Appendix B: X<br />
NVIDIA(0): Config Options in the README.<br />
}}<br />
<br />
While completely harmless, you may get rid of this message by disabling the {{ic|ConnectToAcpid}} option in your {{ic|/etc/X11/xorg.conf.d/20-nvidia.conf}}:<br />
<br />
Section "Device"<br />
...<br />
Driver "nvidia"<br />
Option "ConnectToAcpid" "0"<br />
...<br />
EndSection<br />
<br />
If you are on laptop, it might be a good idea to install and enable the [[acpid]] daemon instead.<br />
<br />
== Displaying GPU temperature in the shell ==<br />
<br />
There are three methods to query the GPU temperature. ''nvidia-settings'' requires that you are using X; ''nvidia-smi'' and ''nvclock'' do not. Also note that ''nvclock'' currently does not work with newer NVIDIA cards such as GeForce 200 series cards, nor with embedded GPUs such as the Zotac IONITX's 8800GS.<br />
<br />
=== nvidia-settings ===<br />
<br />
To display the GPU temp in the shell, use ''nvidia-settings'' as follows:<br />
$ nvidia-settings -q gpucoretemp<br />
<br />
This will output something similar to the following:<br />
Attribute 'GPUCoreTemp' (hostname:0.0): 41.<br />
'GPUCoreTemp' is an integer attribute.<br />
'GPUCoreTemp' is a read-only attribute.<br />
'GPUCoreTemp' can use the following target types: X Screen, GPU.<br />
<br />
The GPU temperature of this board is 41 C.<br />
<br />
In order to get just the temperature for use in utilities such as ''rrdtool'' or ''conky'':<br />
<br />
{{hc|$ nvidia-settings -q gpucoretemp -t|41}}<br />
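A hypothetical wrapper for such scripts might look like this; the fallback value {{ic|41}} is only a placeholder for machines where ''nvidia-settings'' is unavailable or no X display is running:<br />

```shell
# Hypothetical helper for conky/rrdtool scripts: query the GPU core
# temperature, falling back to a placeholder when nvidia-settings
# cannot be used (no driver, no X display).
get_gpu_temp() {
    nvidia-settings -q gpucoretemp -t 2>/dev/null || echo 41
}
temp=$(get_gpu_temp)
echo "GPU temperature: ${temp} C"
```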
<br />
=== nvidia-smi ===<br />
<br />
''nvidia-smi'' can read temperatures directly from the GPU without the need to use X at all, e.g. when running Wayland or on a headless server. <br />
To display the GPU temperature in the shell, use ''nvidia-smi'' as follows:<br />
<br />
$ nvidia-smi<br />
<br />
This should output something similar to the following:<br />
<br />
{{hc|$ nvidia-smi|<nowiki><br />
Fri Jan 6 18:53:54 2012 <br />
+------------------------------------------------------+ <br />
| NVIDIA-SMI 2.290.10 Driver Version: 290.10 | <br />
|-------------------------------+----------------------+----------------------+<br />
| Nb. Name | Bus Id Disp. | Volatile ECC SB / DB |<br />
| Fan Temp Power Usage /Cap | Memory Usage | GPU Util. Compute M. |<br />
|===============================+======================+======================|<br />
| 0. GeForce 8500 GT | 0000:01:00.0 N/A | N/A N/A |<br />
| 30% 62 C N/A N/A / N/A | 17% 42MB / 255MB | N/A Default |<br />
|-------------------------------+----------------------+----------------------|<br />
| Compute processes: GPU Memory |<br />
| GPU PID Process name Usage |<br />
|=============================================================================|<br />
| 0. ERROR: Not Supported |<br />
+-----------------------------------------------------------------------------+<br />
</nowiki>}}<br />
<br />
Only for temperature:<br />
<br />
{{hc|$ nvidia-smi -q -d TEMPERATURE|<nowiki><br />
<br />
====NVSMI LOG====<br />
<br />
Timestamp : Sun Apr 12 08:49:10 2015<br />
Driver Version : 346.59<br />
<br />
Attached GPUs : 1<br />
GPU 0000:01:00.0<br />
Temperature<br />
GPU Current Temp : 52 C<br />
GPU Shutdown Temp : N/A<br />
GPU Slowdown Temp : N/A<br />
<br />
</nowiki>}}<br />
<br />
In order to get just the temperature for use in utilities such as ''rrdtool'' or ''conky'':<br />
<br />
{{hc|<nowiki>$ nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader,nounits</nowiki>|52}}<br />
<br />
Reference: http://www.question-defense.com/2010/03/22/gpu-linux-shell-temp-get-nvidia-gpu-temperatures-via-linux-cli.<br />
<br />
=== nvclock ===<br />
<br />
Use {{AUR|nvclock}} which is available from the [[AUR]].<br />
<br />
{{Note|''nvclock'' cannot access thermal sensors on newer NVIDIA cards such as Geforce 200 series cards.}}<br />
<br />
There can be significant differences between the temperatures reported by ''nvclock'' and ''nvidia-settings''/''nv-control''. According to [http://sourceforge.net/projects/nvclock/forums/forum/67426/topic/1906899 this post] by the author (thunderbird) of ''nvclock'', the ''nvclock'' values should be more accurate.<br />
<br />
== Set fan speed at login ==<br />
<br />
{{Poor writing|Refer to [[#Enabling overclocking]] for description of ''Coolbits''.}}<br />
<br />
You can adjust the fan speed on your graphics card with the ''nvidia-settings'' console interface. First ensure that your Xorg configuration sets the Coolbits option to {{ic|4}}, {{ic|5}} or {{ic|12}} (for Fermi and above) in your {{ic|Device}} section to enable fan control.<br />
<br />
Option "Coolbits" "4"<br />
<br />
{{Note|GeForce 400/500 series cards cannot currently set fan speeds at login using this method. This method only allows for the setting of fan speeds within the current X session by way of nvidia-settings.}}<br />
<br />
Place the following line in your [[xinitrc]] file to adjust the fan when you launch Xorg. Replace {{ic|''n''}} with the fan speed percentage you want to set.<br />
<br />
nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUCurrentFanSpeed=''n''"<br />
<br />
You can also configure a second GPU by incrementing the GPU and fan number.<br />
<br />
nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUCurrentFanSpeed=''n''" \<br />
             -a "[gpu:1]/GPUFanControlState=1" -a "[fan:1]/GPUCurrentFanSpeed=''n''" &<br />
<br />
If you use a login manager such as GDM or KDM, you can create a desktop entry file to process this setting. Create {{ic|~/.config/autostart/nvidia-fan-speed.desktop}} and place this text inside it. Again, change {{ic|''n''}} to the speed percentage you want.<br />
<br />
[Desktop Entry]<br />
Type=Application<br />
Exec=nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUCurrentFanSpeed=''n''"<br />
X-GNOME-Autostart-enabled=true<br />
Name=nvidia-fan-speed<br />
<br />
{{Note|Since the drivers version 349.16, {{ic|GPUCurrentFanSpeed}} has to be replaced with {{ic|GPUTargetFanSpeed}}.[https://devtalk.nvidia.com/default/topic/821563/linux/can-t-control-fan-speed-with-beta-driver-349-12/post/4526208/#4526208]}}<br />
<br />
To make it possible to adjust the fan speed of more than one graphics card, run:<br />
$ nvidia-xconfig --enable-all-gpus<br />
$ nvidia-xconfig --cool-bits=4<br />
<br />
== Manual configuration ==<br />
<br />
Several tweaks (which cannot be enabled [[NVIDIA#Automatic configuration|automatically]] or with the [[NVIDIA#NVIDIA Settings|GUI]]) can be performed by editing your [[NVIDIA#Minimal configuration|config]] file. The Xorg server will need to be restarted before any changes are applied.<br />
<br />
See [ftp://download.nvidia.com/XFree86/Linux-x86/355.11/README/README.txt NVIDIA Accelerated Linux Graphics Driver README and Installation Guide] for additional details and options.<br />
<br />
=== Disabling the logo on startup ===<br />
<br />
Add the {{ic|"NoLogo"}} option under section {{ic|Device}}:<br />
Option "NoLogo" "1"<br />
<br />
=== Overriding monitor detection ===<br />
<br />
The {{ic|"ConnectedMonitor"}} option under section {{ic|Device}} allows overriding monitor detection when the X server starts, which may save a significant amount of time at start up. The available options are: {{ic|"CRT"}} for analog connections, {{ic|"DFP"}} for digital monitors and {{ic|"TV"}} for televisions.<br />
<br />
The following statement forces the NVIDIA driver to bypass startup checks and recognize the monitor as DFP:<br />
Option "ConnectedMonitor" "DFP"<br />
{{Note| Use "CRT" for all analog 15 pin VGA connections, even if the display is a flat panel. "DFP" is intended for DVI, HDMI, or DisplayPort digital connections only.}}<br />
<br />
=== Enabling brightness control ===<br />
<br />
Add under section {{ic|Device}}:<br />
Option "RegistryDwords" "EnableBrightnessControl=1"<br />
<br />
If brightness control still does not work with this option, try installing {{AUR|nvidia-bl}} or {{AUR|nvidiabl}}.<br />
<br />
{{Note|Installing either {{AUR|nvidia-bl}} or {{AUR|nvidiabl}} will provide a {{ic|/sys/class/backlight/nvidia_backlight/}} interface to backlight brightness control, but your system may continue to issue backlight control changes on {{ic|/sys/class/backlight/acpi_video0/}}. One solution in this case is to watch for changes on, e.g. {{ic|acpi_video0/brightness}} with ''inotifywait'' and to translate and write to {{ic|nvidia_backlight/brightness}} accordingly. See [[Backlight#sysfs modified but no brightness change]].}}<br />
<br />
=== Enabling SLI ===<br />
<br />
{{Warning|As of May 7, 2011, you may experience sluggish video performance in GNOME 3 after enabling SLI.}}<br />
{{Warning|Since the GTX 10xx Series (1080, 1070, 1060, etc) only 2-way SLI is supported. 3-way and 4-way SLI may work for CUDA/OpenCL applications, but will most likely break all OpenGL applications.}}<br />
<br />
Taken from the NVIDIA driver's [ftp://download.nvidia.com/XFree86/Linux-x86/355.11/README/xconfigoptions.html README] Appendix B: ''This option controls the configuration of SLI rendering in supported configurations.'' A "supported configuration" is a computer equipped with an SLI-Certified Motherboard and 2 or 3 SLI-Certified GeForce GPUs. See NVIDIA's [http://www.slizone.com/page/home.html SLI Zone] for more information.<br />
<br />
Find the first GPU's PCI Bus ID using {{ic|lspci}}:<br />
{{hc|<nowiki>$ lspci | grep VGA</nowiki>|<br />
03:00.0 VGA compatible controller: nVidia Corporation G92 [GeForce 8800 GTS 512] (rev a2)<br />
05:00.0 VGA compatible controller: nVidia Corporation G92 [GeForce 8800 GTS 512] (rev a2)<br />
}}<br />
<br />
Add the BusID (3 in the previous example) under section {{ic|Device}}:<br />
BusID "PCI:3:0:0"<br />
<br />
{{Note|The format is important. The BusID value must be specified as {{ic|"PCI:<BusID>:0:0"}}}}<br />
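Note also that ''lspci'' prints the bus number in hexadecimal, while {{ic|xorg.conf}} expects decimal values. A small sketch of the conversion (the slot {{ic|0a:00.0}} is hypothetical):<br />

```shell
# lspci reports slots like "0a:00.0" in hex; xorg.conf wants the
# decimal form "PCI:10:0:0". Requires bash for the 16# base prefix.
slot="0a:00.0"                 # hypothetical slot from `lspci | grep VGA`
bus=$((16#${slot%%:*}))        # hex bus -> decimal
rest=${slot#*:}
dev=$((16#${rest%%.*}))        # device number
fn=$((16#${rest##*.}))         # function number
busid="PCI:$bus:$dev:$fn"
echo "$busid"
```

For the {{ic|03:00.0}} example above the hex and decimal forms happen to coincide, so the conversion only matters for bus numbers of 10 (0a) and higher.<br />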
<br />
Add the desired SLI rendering mode value under section {{ic|Screen}}:<br />
Option "SLI" "AA"<br />
<br />
The following table presents the available rendering modes.<br />
<br />
{| class="wikitable"<br />
! Value !! Behavior<br />
|-<br />
| 0, no, off, false, Single || Use only a single GPU when rendering.<br />
|-<br />
| 1, yes, on, true, Auto || Enable SLI and allow the driver to automatically select the appropriate rendering mode.<br />
|-<br />
| AFR || Enable SLI and use the alternate frame rendering mode.<br />
|-<br />
| SFR || Enable SLI and use the split frame rendering mode.<br />
|-<br />
| AA || Enable SLI and use SLI antialiasing. Use this in conjunction with full scene antialiasing to improve visual quality.<br />
|}<br />
<br />
Alternatively, you can use the {{ic|nvidia-xconfig}} utility to insert these changes into {{ic|xorg.conf}} with a single command:<br />
# nvidia-xconfig --busid=PCI:3:0:0 --sli=AA<br />
<br />
To verify that SLI mode is enabled from a shell:<br />
{{hc|<nowiki>$ nvidia-settings -q all | grep SLIMode</nowiki>|<br />
Attribute 'SLIMode' (arch:0.0): AA <br />
'SLIMode' is a string attribute.<br />
'SLIMode' is a read-only attribute.<br />
'SLIMode' can use the following target types: X Screen.<br />
}}<br />
<br />
{{Warning| After enabling SLI, your system may become frozen/non-responsive upon starting xorg. It is advisable that you disable your display manager before restarting.}}<br />
<br />
=== Enabling overclocking ===<br />
<br />
{{Warning|Please note that overclocking may damage hardware and that no responsibility may be placed on the authors of this page due to any damage to any information technology equipment from operating products out of specifications set by the manufacturer.}}<br />
<br />
Overclocking is controlled via ''Coolbits'' option in the {{ic|Device}} section, which enables various unsupported features:<br />
Option "Coolbits" "''value''"<br />
<br />
{{Tip|The ''Coolbits'' option can be easily controlled with the ''nvidia-xconfig'', which manipulates the Xorg configuration files: {{bc|1=# nvidia-xconfig --cool-bits=''value''}}}}<br />
<br />
The ''Coolbits'' value is the sum of its component bits in the binary numeral system. The component bits are:<br />
<br />
* {{ic|1}} (bit 0) - Enables overclocking of older (pre-Fermi) cores on the ''Clock Frequencies'' page in ''nvidia-settings''.<br />
* {{ic|2}} (bit 1) - When this bit is set, the driver will "attempt to initialize SLI when using GPUs with different amounts of video memory".<br />
* {{ic|4}} (bit 2) - Enables manual configuration of GPU fan speed on the ''Thermal Monitor'' page in ''nvidia-settings''.<br />
* {{ic|8}} (bit 3) - Enables overclocking on the ''PowerMizer'' page in ''nvidia-settings''. Available since version 337.12 for the Fermi architecture and newer.[http://www.phoronix.com/scan.php?px=MTY1OTM&page=news_item]<br />
* {{ic|16}} (bit 4) - Enables overvoltage using ''nvidia-settings'' CLI options. Available since version 346.16 for the Fermi architecture and newer.[http://www.phoronix.com/scan.php?page=news_item&px=MTg0MDI]<br />
<br />
To enable multiple features, add the ''Coolbits'' values together. For example, to enable overclocking and overvoltage of Fermi cores, set {{ic|Option "Coolbits" "24"}}.<br />
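The sum can be sketched as simple shell arithmetic; the variable names below are only illustrative labels for the bits listed above:<br />

```shell
# Coolbits is a bitmask; add the component values together.
fan_control=4      # bit 2: manual fan control
overclock=8        # bit 3: PowerMizer overclocking (Fermi and newer)
overvolt=16        # bit 4: overvoltage
coolbits=$((overclock + overvolt))          # the "24" example from the text
echo "$coolbits"
all=$((fan_control + overclock + overvolt)) # fan control as well
echo "$all"
```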
<br />
The documentation of ''Coolbits'' can be found in {{ic|/usr/share/doc/nvidia/html/xconfigoptions.html}}. Driver version 346.16 documentation on ''Coolbits'' can be found online [ftp://download.nvidia.com/XFree86/Linux-x86/355.11/README/xconfigoptions.html here].<br />
<br />
{{Note|An alternative is to edit and reflash the GPU BIOS either under DOS (preferred), or within a Win32 environment by way of [http://www.mvktech.net/component/option,com_remository/Itemid,26/func,select/id,127/orderby,2/page,1/ nvflash]{{Dead link|2013|05|25}} and [http://www.mvktech.net/component/option,com_remository/Itemid,26/func,select/id,135/orderby,2/page,1/ NiBiTor 6.0]{{Dead link|2013|05|25}}. The advantage of BIOS flashing is that not only can voltage limits be raised, but stability is generally improved over software overclocking methods such as Coolbits. [http://ivanvojtko.blogspot.sk/2014/03/how-to-overclock-geforce-460gtx-fermi.html Fermi BIOS modification tutorial]}}<br />
<br />
==== Setting static 2D/3D clocks ====<br />
<br />
Set the following string in the {{ic|Device}} section to enable PowerMizer at its maximum performance level (VSync will not work without this line):<br />
Option "RegistryDwords" "PerfLevelSrc=0x2222"</div>Mousemanhttps://wiki.archlinux.org/index.php?title=PulseAudio/Troubleshooting&diff=415441PulseAudio/Troubleshooting2016-01-15T08:29:37Z<p>Mouseman: /* Finding out your audio device parameters (1/4) */</p>
<hr />
<div>[[Category:Sound]]<br />
[[it:PulseAudio/Troubleshooting]]<br />
[[ja:PulseAudio/トラブルシューティング]]<br />
[[ru:PulseAudio/Troubleshooting]]<br />
See [[PulseAudio]] for the main article.<br />
<br />
== Volume ==<br />
<br />
Here you will find some hints on volume issues and why you may not hear anything.<br />
<br />
=== Auto-Mute Mode ===<br />
<br />
Auto-Mute Mode may be enabled. It can be disabled using {{ic|alsamixer}}.<br />
<br />
See http://superuser.com/questions/431079/how-to-disable-auto-mute-mode for more.<br />
<br />
To save your current settings as the default options, run {{ic|alsactl store}} as root.<br />
<br />
=== Muted audio device ===<br />
<br />
If one experiences no audio output via any means while using [[ALSA]], attempt to unmute the sound card. To do this, launch {{ic|alsamixer}} and make sure each column has a green {{ic|00}} under it (this can be toggled by pressing {{ic|m}}):<br />
<br />
$ alsamixer -c 0<br />
<br />
{{Note|alsamixer will not tell you which output device is set as the default. One possible cause of no sound after install is that PulseAudio detects the wrong output device as a default. Install {{Pkg|pavucontrol}} and check if there is any output on the pavucontrol panel when playing a ''.wav'' file.}}<br />
<br />
=== Muted application ===<br />
<br />
If a specific application is muted or low while all else seems to be in order, it may be due to individual {{ic|sink-input}} settings. With the offending application playing audio, run:<br />
<br />
$ pacmd list-sink-inputs<br />
<br />
Find and make note of the {{ic|index}} of the corresponding {{ic|sink input}}. The {{ic|properties:}} {{ic|application.name}} and {{ic|application.process.binary}}, among others, should help here. Ensure sane settings are present, specifically those of {{ic|muted}} and {{ic|volume}}.<br />
If the sink is muted, it can be unmuted by:<br />
<br />
$ pacmd set-sink-input-mute <index> false<br />
<br />
If the volume needs adjusting, it can be set to 100% by:<br />
<br />
$ pacmd set-sink-input-volume <index> 0x10000<br />
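PulseAudio volumes run from 0 to 0x10000 (65536, i.e. 100%), so other percentages can be converted with simple arithmetic; the 75% figure below is just an example:<br />

```shell
# Convert a percentage to the 0..0x10000 volume scale pacmd expects.
percent=75
vol=$((0x10000 * percent / 100))
vol_hex=$(printf '0x%x' "$vol")
echo "$vol_hex"    # pass this value to pacmd set-sink-input-volume
```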
<br />
{{Note|If {{ic|pacmd}} reports {{ic|0 sink input(s)}}, double-check that the application is playing audio. If it is still absent, verify that other applications show up as sink inputs.}}<br />
<br />
=== Volume adjustment does not work properly ===<br />
<br />
Check:<br />
{{ic|/usr/share/pulseaudio/alsa-mixer/paths/analog-output.conf.common}}<br />
<br />
If the volume does not appear to increment/decrement properly using {{ic|alsamixer}} or {{ic|amixer}}, it may be due to PulseAudio having a larger number of increments (65537 to be exact). Try using larger values when changing volume (e.g. {{ic|amixer set Master 655+}}).<br />
<br />
=== Per-application volumes change when the Master volume is adjusted ===<br />
<br />
This is because PulseAudio uses flat volumes by default, instead of volumes relative to an absolute master volume. If this is found to be inconvenient or otherwise undesirable, relative volumes can be enabled by disabling flat volumes in the PulseAudio daemon's configuration file:<br />
<br />
{{hc|/etc/pulse/daemon.conf or ~/.config/pulse/daemon.conf|<nowiki><br />
flat-volumes = no<br />
</nowiki>}}<br />
<br />
and then restarting PulseAudio by executing<br />
<br />
$ pulseaudio -k<br />
$ pulseaudio --start<br />
<br />
=== Volume gets louder every time a new application is started ===<br />
<br />
By default, changing the volume in an application sets the global system volume to that level instead of only affecting the respective application. Applications setting their volume on startup will therefore cause the system volume to "jump".<br />
<br />
Fix this by disabling flat volumes, as demonstrated in the previous section. When Pulse comes back after a few seconds, applications will not alter the global system volume anymore but have their own volume level again.<br />
<br />
{{Note|A previously installed and removed pulseaudio-equalizer may leave behind remnants of the setup in {{ic|~/.config/pulse/default.pa}} or {{ic|~/.pulse/default.pa}} which can also cause maximized volume trouble. Comment that out as needed.}}<br />
<br />
=== Sound output is only mono on M-Audio Audiophile 2496 sound card ===<br />
<br />
Add the following:<br />
<br />
{{hc|/etc/pulseaudio/default.pa|<nowiki><br />
load-module module-alsa-sink sink_name=delta_out device=hw:M2496 format=s24le channels=10 channel_map=left,right,aux0,aux1,aux2,aux3,aux4,aux5,aux6,aux7<br />
load-module module-alsa-source source_name=delta_in device=hw:M2496 format=s24le channels=12 channel_map=left,right,aux0,aux1,aux2,aux3,aux4,aux5,aux6,aux7,aux8,aux9<br />
set-default-sink delta_out<br />
set-default-source delta_in<br />
</nowiki>}}<br />
<br />
=== No sound below a volume cutoff ===<br />
<br />
Known issue (won't fix): https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/223133<br />
<br />
If sound does not play when PulseAudio's volume is set below a certain level, try setting {{ic|1=ignore_dB=1}} in {{ic|/etc/pulse/default.pa}}:<br />
{{hc|/etc/pulse/default.pa|<nowiki><br />
load-module module-udev-detect ignore_dB=1<br />
</nowiki>}}<br />
<br />
However, be aware that it may cause another bug preventing PulseAudio to unmute speakers when headphones or other audio devices are unplugged.<br />
<br />
=== Low volume for internal microphone ===<br />
<br />
If you experience low volume on internal notebook microphone, try setting:<br />
{{hc|/etc/pulse/default.pa|<nowiki><br />
set-source-volume 1 300000<br />
</nowiki>}}<br />
<br />
=== Clients alter master output volume (a.k.a. volume jumps to 100% after running application) ===<br />
<br />
If changing the volume in specific applications or simply running an application changes the master output volume, this is likely due to PulseAudio's flat volumes mode. Before disabling it, KDE users should try lowering their system notifications volume in ''System Settings -> Application and System Notifications -> Manage Notifications'' under the ''Player Settings'' tab to something reasonable. Changing the ''Event Sounds'' volume in KMix or another volume mixer application will not help here. This should make the flat-volumes mode work out as intended; if it does not, some other application is likely requesting 100% volume when it plays something. If all else fails, you can try to disable flat volumes:<br />
<br />
{{hc|/etc/pulse/daemon.conf|<nowiki><br />
flat-volumes = no<br />
</nowiki>}}<br />
<br />
Then restart PulseAudio daemon:<br />
<br />
# pulseaudio -k<br />
# pulseaudio --start<br />
<br />
=== No sound after resume from suspend ===<br />
<br />
If audio generally works, but stops after resume from suspend, try "reloading" PulseAudio by executing:<br />
$ /usr/bin/pasuspender /bin/true<br />
<br />
This is better than completely killing and restarting it ({{ic|pulseaudio -k}} followed by {{ic|pulseaudio --start}}), because it doesn't break already running applications.<br />
<br />
If the above fixes your problem, you may wish to automate it, by creating a systemd service file.<br />
<br />
1. Create the template service file in {{ic|/etc/systemd/system/resume-fix-pulseaudio@.service}}:<br />
<br />
[Unit]<br />
Description=Fix PulseAudio after resume from suspend<br />
After=suspend.target<br />
<br />
[Service]<br />
User=%I<br />
Type=oneshot<br />
Environment="XDG_RUNTIME_DIR=/run/user/%U"<br />
ExecStart=/usr/bin/pasuspender /bin/true<br />
<br />
[Install]<br />
WantedBy=suspend.target<br />
<br />
2. Enable it for your user account<br />
<br />
# systemctl enable resume-fix-pulseaudio@YOUR_USERNAME_HERE.service<br />
<br />
3. Reload systemd<br />
<br />
# systemctl --system daemon-reload<br />
<br />
=== ALSA channels mute when headphones are plugged/unplugged improperly ===<br />
<br />
If audio remains muted on the wrong channel in alsamixer (stuck at 0%) after you plug in or unplug your headphones, you may be able to fix it by opening {{ic|/etc/pulse/default.pa}} and commenting out the line:<br />
<br />
load-module module-switch-on-port-available<br />
<br />
== Microphone ==<br />
<br />
=== Microphone not detected by PulseAudio ===<br />
<br />
Determine the card and device number of your mic:<br />
<br />
$ arecord -l<br />
**** List of CAPTURE Hardware Devices ****<br />
card 0: PCH [HDA Intel PCH], device 0: ALC269VC Analog [ALC269VC Analog]<br />
Subdevices: 1/1<br />
Subdevice #0: subdevice #0<br />
<br />
In hw:CARD,DEVICE notation, you would specify the above device as {{ic|hw:0,0}}.<br />
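The card and device numbers can also be extracted programmatically; the sample line below mirrors the {{ic|arecord -l}} output above, in practice pipe the command's output directly:<br />

```shell
# Derive hw:CARD,DEVICE from an `arecord -l` line.
sample='card 0: PCH [HDA Intel PCH], device 0: ALC269VC Analog [ALC269VC Analog]'
hw=$(printf '%s\n' "$sample" | \
     sed -n 's/^card \([0-9]*\):.*device \([0-9]*\):.*/hw:\1,\2/p')
echo "$hw"
```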
<br />
Then, edit {{ic|/etc/pulse/default.pa}} and insert a {{ic|load-module}} line specifying your device as follows:<br />
<br />
load-module module-alsa-source device=hw:0,0<br />
# the line above should be somewhere before the line below<br />
.ifexists module-udev-detect.so<br />
<br />
Finally, restart pulseaudio to apply the new settings:<br />
<br />
$ pulseaudio -k ; pulseaudio -D<br />
<br />
If everything worked correctly, you should now see your mic show up when running {{ic|pavucontrol}} (under the {{ic|Input Devices}} tab).<br />
<br />
=== PulseAudio uses wrong microphone ===<br />
<br />
If PulseAudio uses the wrong microphone, and changing the Input Device with Pavucontrol did not help, take a look at alsamixer. It seems that Pavucontrol does not always set the input source correctly.<br />
<br />
$ alsamixer<br />
<br />
Press {{ic|F6}} and choose your sound card, e.g. HDA Intel. Now press {{ic|F5}} to display all items. Try to find the item: {{ic|Input Source}}. With the up/down arrow keys you are able to change the input source.<br />
<br />
Now try if the correct microphone is used for recording.<br />
<br />
=== No microphone on ThinkPad T400/T500/T420 ===<br />
<br />
Run:<br />
<br />
alsamixer -c 0<br />
<br />
Unmute and maximize the volume of the "Internal Mic".<br />
<br />
Once you see the device with:<br />
<br />
arecord -l<br />
<br />
you might still need to adjust the settings. The microphone and the audio jack are duplexed. Set the configuration of the internal audio in pavucontrol to ''Analog Stereo Duplex''.<br />
<br />
=== No microphone input on Acer Aspire One ===<br />
<br />
Install pavucontrol, unlink the microphone channels and turn down the left one to 0.<br />
Reference: http://getsatisfaction.com/jolicloud/topics/deaf_internal_mic_on_acer_aspire_one#reply_2108048<br />
<br />
=== Static noise in microphone recording ===<br />
<br />
If you get static noise in recordings made by Skype, gnome-sound-recorder, arecord, etc., the sound card sampling rate is likely set incorrectly. To fix this, set the sampling rate for the sound hardware in {{ic|/etc/pulse/daemon.conf}}.<br />
<br />
==== Determine sound cards in the system (1/5) ====<br />
<br />
This requires {{Pkg|alsa-utils}} and related packages to be installed:<br />
{{hc|$ arecord --list-devices|<br />
**** List of CAPTURE Hardware Devices ****<br />
card 0: Intel [HDA Intel], device 0: ALC888 Analog [ALC888 Analog]<br />
Subdevices: 1/1<br />
Subdevice #0: subdevice #0<br />
card 0: Intel [HDA Intel], device 2: ALC888 Analog [ALC888 Analog]<br />
Subdevices: 1/1<br />
Subdevice #0: subdevice #0<br />
}}<br />
<br />
Sound card is {{ic|hw:0,0}}.<br />
<br />
==== Determine sampling rate of the sound card (2/5) ====<br />
<br />
{{hc|1=$ arecord -f dat -r 60000 -D hw:0,0 -d 5 test.wav|2=<br />
Recording WAVE 'test.wav' : Signed 16 bit Little Endian, Rate 60000 Hz, Stereo<br />
Warning: rate is not accurate (requested = 60000Hz, '''got = 96000Hz''')<br />
please, try the plug plugin<br />
}}<br />
<br />
Observe the {{ic|1=got = 96000Hz}}: this is the maximum sampling rate of the card.<br />
<br />
==== Setting the sound card's sampling rate into PulseAudio configuration (3/5) ====<br />
<br />
The default sampling rate in PulseAudio:<br />
{{hc|1=$ grep "default-sample-rate" /etc/pulse/daemon.conf|2=<br />
; default-sample-rate = 44100<br />
}}<br />
<br />
{{ic|44100}} is disabled and needs to be changed to {{ic|96000}}:<br />
# sed 's/; default-sample-rate = 44100/default-sample-rate = 96000/g' -i /etc/pulse/daemon.conf<br />
<br />
==== Restart PulseAudio to apply the new settings (4/5) ====<br />
<br />
$ pulseaudio -k<br />
$ pulseaudio --start<br />
<br />
==== Finally check by recording and playing it back (5/5) ====<br />
<br />
Let us record some voice using a microphone for, say, 10 seconds. Make sure the microphone is not muted and its capture volume is turned up.<br />
<br />
$ arecord -f cd -d 10 test-mic.wav<br />
<br />
After 10 seconds, let us play the recording...<br />
<br />
$ aplay test-mic.wav<br />
<br />
Hopefully, there is no longer any static noise in the microphone recording.<br />
<br />
=== No microphone on Steam or Skype with enable-remixing = no ===<br />
<br />
When you set {{ic|1=enable-remixing = no}} in {{ic|/etc/pulse/daemon.conf}}, you may find that your microphone stops working in certain applications such as Skype or Steam. This happens because these applications capture the microphone as mono only, and with remixing disabled, PulseAudio no longer remixes your stereo microphone down to mono.<br />
<br />
To fix this, tell PulseAudio to do the remapping explicitly:<br />
<br />
1. Find the name of the source <br />
<br />
# pacmd list-sources<br />
<br />
Example output edited for brevity, the name you need is in bold:<br />
<br />
index: 2<br />
name: <'''alsa_input.pci-0000_00_14.2.analog-stereo'''><br />
driver: <module-alsa-card.c><br />
flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY DYNAMIC_LATENCY<br />
<br />
2. Add a remap rule to {{ic|/etc/pulse/default.pa}}, use the name you found with the previous command, here we will use '''alsa_input.pci-0000_00_14.2.analog-stereo''' as an example:<br />
<br />
{{hc|/etc/pulse/default.pa|<nowiki><br />
### Remap microphone to mono<br />
load-module module-remap-source master=alsa_input.pci-0000_00_14.2.analog-stereo master_channel_map=front-left,front-right channels=2 channel_map=mono,mono<br />
</nowiki>}}<br />
<br />
3. Restart Pulseaudio<br />
<br />
# pulseaudio -k<br />
<br />
{{Note|Pulseaudio may fail to start if you don't exit a program that was using the microphone (e.g. if you tested on Steam before modifying the file), in which case you should exit the application and manually start Pulseaudio}}<br />
<br />
# pulseaudio --start<br />
<br />
== Audio quality ==<br />
<br />
=== Enable Echo/Noise-Cancelation ===<br />
<br />
Arch does not load the PulseAudio echo-cancellation module by default, so it has to be added in {{ic|/etc/pulse/default.pa}}. First check whether the module is already loaded by running {{ic|pacmd}} and entering {{ic|list-modules}}. If no line shows {{ic|name: <module-echo-cancel>}}, add<br />
<br />
{{hc|/etc/pulse/default.pa|<br />
### Enable echo/noise cancellation<br />
load-module module-echo-cancel<br />
}}<br />
<br />
then restart PulseAudio:<br />
<br />
pulseaudio -k<br />
pulseaudio --start<br />
<br />
and check that the module is active by starting {{ic|pavucontrol}}: under the {{ic|Recording}} tab, the input device should be shown as an {{ic|Echo-Cancel Source Stream}}.<br />
<br />
=== Glitches, skips or crackling ===<br />
<br />
The newer implementation of the PulseAudio sound server uses timer-based audio scheduling instead of the traditional, interrupt-driven approach. <br />
<br />
Timer-based scheduling may expose issues in some ALSA drivers. On the other hand, other drivers might be glitchy without it on, so check to see what works on your system. <br />
<br />
To turn timer-based scheduling off add {{ic|1=tsched=0}} in {{ic|/etc/pulse/default.pa}}:<br />
{{hc|/etc/pulse/default.pa|2=<br />
load-module module-udev-detect tsched=0<br />
}}<br />
<br />
Then restart the PulseAudio server:<br />
<br />
$ pulseaudio -k<br />
$ pulseaudio --start<br />
<br />
Do the reverse to enable timer-based scheduling, if not already enabled by default.<br />
<br />
If you are using Intel's [[Wikipedia:IOMMU|IOMMU]] and experience glitches and/or skips, add {{ic|1=intel_iommu=igfx_off}} to your kernel command line.<br />
<br />
Some Intel audio cards using the {{ic|snd-hda-intel}} module need the options {{ic|1=vid=8086 pid=8ca0 snoop=0}}. To set them permanently, create or modify the following file with the line below:<br />
{{hc|/etc/modprobe.d/sound.conf|2=<br />
options snd-hda-intel vid=8086 pid=8ca0 snoop=0<br />
}}<br />
<br />
Please report any such cards to the [http://www.freedesktop.org/wiki/Software/PulseAudio/Backends/ALSA/BrokenDrivers/ PulseAudio Broken Sound Drivers page].<br />
<br />
=== Setting the default fragment number and buffer size in PulseAudio ===<br />
<br />
{{Poor writing|Copied from Linux mint topic with few additions}}<br />
<br />
==== Finding out your audio device parameters (1/4) ====<br />
<br />
To find out what your sound card buffering settings are:<br />
{{bc|<nowiki><br />
$ echo autospawn = no >> ~/.config/pulse/client.conf<br />
$ pulseaudio -k<br />
$ LANG=C timeout --foreground -k 10 -s kill 10 pulseaudio -vvvv 2>&1 | grep device.buffering -B 10<br />
$ sed -i '$d' ~/.config/pulse/client.conf<br />
</nowiki>}}<br />
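Note that the trailing {{ic|sed -i '$d'}} in the sequence above simply deletes the last line of {{ic|client.conf}}, undoing the {{ic|1=autospawn = no}} line appended in the first step. A minimal demonstration on a scratch file:

```shell
# sed's '$' address means "last line"; 'd' deletes it.
f=$(mktemp)
printf 'existing = setting\n' > "$f"
echo 'autospawn = no' >> "$f"      # what the first step appends
sed -i '$d' "$f"                   # what the last step undoes
cat "$f"                           # only the original line remains
rm -f "$f"
```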
<br />
{{Note|PulseAudio may continue to autospawn even with the setting above in place, because PulseAudio version 7 and later uses socket activation through systemd. You can prevent that with {{ic|systemctl --user mask pulseaudio.socket}}.<br />
<br />
For more information, see [[PulseAudio#Running]].}}<br />
<br />
For each sound card detected by PulseAudio, you will see an output similar to:<br />
{{bc|<nowiki><br />
I: [pulseaudio] source.c: alsa.long_card_name = "HDA Intel at 0xfa200000 irq 46"<br />
I: [pulseaudio] source.c: alsa.driver_name = "snd_hda_intel"<br />
I: [pulseaudio] source.c: device.bus_path = "pci-0000:00:1b.0"<br />
I: [pulseaudio] source.c: sysfs.path = "/devices/pci0000:00/0000:00:1b.0/sound/card0"<br />
I: [pulseaudio] source.c: device.bus = "pci"<br />
I: [pulseaudio] source.c: device.vendor.id = "8086"<br />
I: [pulseaudio] source.c: device.vendor.name = "Intel Corporation"<br />
I: [pulseaudio] source.c: device.product.name = "82801I (ICH9 Family) HD Audio Controller"<br />
I: [pulseaudio] source.c: device.form_factor = "internal"<br />
I: [pulseaudio] source.c: device.string = "front:0"<br />
I: [pulseaudio] source.c: device.buffering.buffer_size = "768000"<br />
I: [pulseaudio] source.c: device.buffering.fragment_size = "384000"<br />
</nowiki>}}<br />
Take note of the {{ic|buffer_size}} and {{ic|fragment_size}} values for the relevant sound card.<br />
<br />
==== Calculate your fragment size in msecs and number of fragments (2/4) ====<br />
<br />
PulseAudio's default sampling rate and bit depth are set to {{ic|44100Hz}} @ {{ic|16 bits}}.<br />
<br />
With this configuration, the bit rate we need is {{ic|44100}}*{{ic|16}} = {{ic|705600}} bits per second. That's {{ic|1411200 bps}} for stereo.<br />
<br />
Let's take a look at the parameters we have found in the previous step:<br />
<br />
device.buffering.buffer_size = "768000" => 768000/1411200 = 0.544217687075s = 544 msecs<br />
device.buffering.fragment_size = "384000" => 384000/1411200 = 0.272108843537s = 272 msecs<br />
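The arithmetic above can be scripted; a sketch using the example values from the log (44100 Hz, 16-bit, stereo):

```shell
rate=44100 bits=16 channels=2
bps=$((rate * bits * channels))   # 1411200 bits per second

# Convert the sizes PulseAudio reported into milliseconds, and count fragments.
awk -v buf=768000 -v frag=384000 -v bps="$bps" 'BEGIN {
    printf "fragment: %d msec\n", frag / bps * 1000
    printf "fragments: %d\n", buf / frag
}'
```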
<br />
==== Modify PulseAudio's configuration file (3/4) ====<br />
<br />
{{hc|/etc/pulse/daemon.conf|<nowiki><br />
; default-fragments = X<br />
; default-fragment-size-msec = Y<br />
</nowiki>}}<br />
<br />
In the previous step, we calculated the fragment size parameter.<br />
The number of fragments is simply buffer_size/fragment_size, which in this case ({{ic|544/272}}) is {{ic|2}}. Uncomment both settings and set them accordingly:<br />
<br />
{{hc|/etc/pulse/daemon.conf|2=<br />
default-fragments = '''2'''<br />
default-fragment-size-msec = '''272'''<br />
}}<br />
<br />
==== Restart the PulseAudio daemon (4/4) ====<br />
<br />
$ pulseaudio -k<br />
$ pulseaudio --start<br />
<br />
For more information, see: [http://forums.linuxmint.com/viewtopic.php?f=42&t=44862 Linux Mint topic]<br />
<br />
=== Choppy sound with analog surround sound setup ===<br />
<br />
The low-frequency effects (LFE) channel is not remixed per default. To enable it the following needs to be set in {{ic|/etc/pulse/daemon.conf}} :<br />
{{hc|/etc/pulse/daemon.conf|<nowiki><br />
enable-lfe-remixing = yes<br />
</nowiki>}}<br />
<br />
=== Laggy sound ===<br />
<br />
This issue is caused by incorrect buffer sizes. First verify that the variables {{ic|default-fragments}} and {{ic|default-fragment-size-msec}} are not set to non-default values in {{ic|/etc/pulse/daemon.conf}}. If the issue persists, try setting them to the following values:<br />
<br />
{{hc|/etc/pulse/daemon.conf|2=<br />
default-fragments = 5<br />
default-fragment-size-msec = 2<br />
}}<br />
<br />
=== Choppy/distorted sound ===<br />
This can result from an incorrectly set sample rate. Try the following setting:<br />
<br />
{{hc|/etc/pulse/daemon.conf|2=<br />
default-sample-rate = 48000<br />
}}<br />
and restart the PulseAudio server.<br />
<br />
If one experiences choppy sound in applications using [[Wikipedia:OpenAL|OpenAL]], change the sample rate in {{ic|/etc/openal/alsoft.conf}}:<br />
{{hc|/etc/openal/alsoft.conf|2=<br />
frequency = 48000<br />
}}<br />
<br />
Setting the PCM volume above 0 dB can cause [[Wikipedia:Clipping_(audio)|clipping]]. Running {{ic|alsamixer}} will allow you to see if this is the problem and if so fix it. Note that ALSA may not [http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/PulseAudioStoleMyVolumes correctly export] the dB information to PulseAudio. Try the following:<br />
<br />
{{hc|/etc/pulse/default.pa|2=<br />
load-module module-udev-detect ignore_dB=1<br />
}}<br />
<br />
and restart the PulseAudio server. See also [[#No sound below a volume cutoff]].<br />
<br />
== Hardware and Cards ==<br />
<br />
=== No HDMI sound output after some time with the monitor turned off ===<br />
<br />
The monitor is connected via HDMI/DisplayPort, and the audio cable is plugged into the headphone jack of the monitor, but PulseAudio insists that it is unplugged:<br />
<br />
{{hc|pactl list sinks|<br />
...<br />
hdmi-output-0: HDMI / DisplayPort (priority: 5900, not available)<br />
...<br />
}}<br />
<br />
This results in no sound from the HDMI output. A workaround is to switch to another VT and back again. If that does not work: turn off the monitor, switch to another VT, turn the monitor back on, and switch back. This problem has been reported by ATI, Nvidia and Intel users.<br />
<br />
=== No cards ===<br />
<br />
If PulseAudio starts, run {{ic|pacmd list}}. If no cards are reported, make sure that the ALSA devices are not in use:<br />
<br />
$ fuser -v /dev/snd/*<br />
$ fuser -v /dev/dsp<br />
<br />
Make sure any applications using the pcm or dsp files are shut down before restarting PulseAudio.<br />
<br />
=== Starting an application interrupts other app's sound ===<br />
<br />
If some applications (e.g. TeamSpeak, Mumble) interrupt the sound output of already running applications (e.g. DeaDBeeF), you can solve this by commenting out the line {{ic|load-module module-role-cork}} in {{ic|/etc/pulse/default.pa}} as shown below:<br />
<br />
{{hc|/etc/pulse/default.pa|<br />
### Cork music/video streams when a phone stream is active<br />
# load-module module-role-cork<br />
}}<br />
<br />
Then restart PulseAudio as your normal user:<br />
<br />
pulseaudio -k<br />
pulseaudio --start<br />
<br />
=== The only device shown is "dummy output" or newly connected cards aren't detected ===<br />
<br />
This may be caused by settings in {{ic|~/.asoundrc}} overriding the system wide settings in {{ic|/etc/asound.conf}}. This can be prevented by commenting out the last line of {{ic|~/.asoundrc}} like so:<br />
<br />
{{hc|~/.asoundrc|<br />
# </home/''yourusername''/.asoundrc.asoundconf><br />
}}<br />
<br />
Alternatively, some program may be monopolizing the audio device:<br />
<br />
{{hc|# fuser -v /dev/snd/*|<br />
USER PID ACCESS COMMAND<br />
/dev/snd/controlC0: root 931 F.... timidity<br />
bob 1195 F.... panel-6-mixer<br />
/dev/snd/controlC1: bob 1195 F.... panel-6-mixer<br />
bob 1215 F.... pulseaudio<br />
/dev/snd/pcmC0D0p: root 931 F...m timidity<br />
/dev/snd/seq: root 931 F.... timidity<br />
/dev/snd/timer: root 931 f.... timidity<br />
}}<br />
<br />
In this example, timidity is blocking PulseAudio from accessing the audio devices; killing timidity will make the sound work again.<br />
<br />
If that does not help, or you see nothing in the output, removing the {{Pkg|timidity++}} package and restarting the system should get rid of the "dummy output".<br />
<br />
Another reason is [[FluidSynth]] conflicting with PulseAudio as discussed in [https://bbs.archlinux.org/viewtopic.php?id=154002 this thread]. One solution is to remove the package {{Pkg|fluidsynth}}.<br />
<br />
Alternatively you could modify the ''fluidsynth'' configuration file {{ic|/etc/conf.d/fluidsynth}} and change the driver to PulseAudio, then restart ''fluidsynth'' and PulseAudio:<br />
<br />
{{hc|/etc/conf.d/fluidsynth|<br />
output=AUDIO_DRIVER=pulseaudio<br />
OTHER_OPTS='-m alsa_seq -r 48000'<br />
}}<br />
<br />
=== No HDMI 5/7.1 Selection for Device ===<br />
<br />
If you are unable to select 5/7.1 channel output for a working HDMI device, then turning off "stream device reading" in {{ic|/etc/pulse/default.pa}} might help. <br />
<br />
See [[#Fallback device is not respected]].<br />
<br />
=== Failed to create sink input: sink is suspended ===<br />
<br />
If you do not have any output sound and receive dozens of errors related to a suspended sink in your {{ic|journalctl -b}} log, then backup first and then delete your user-specific pulse folders:<br />
<br />
$ rm -r ~/.pulse ~/.pulse-cookie ~/.config/pulse<br />
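Since the instructions say to back up first, the two steps can be combined by moving the folders aside instead of deleting them, so they can be restored if the reset does not help. A sketch, demonstrated here on scratch directories rather than the real home directory:

```shell
# Stand-ins for the real paths (~/.pulse, ~/.pulse-cookie, ~/.config/pulse);
# substitute those when applying this for real.
home=$(mktemp -d)
mkdir -p "$home/.config/pulse"
touch "$home/.pulse-cookie"

# Move the per-user PulseAudio state into a backup directory.
backup=$(mktemp -d)
for d in "$home/.pulse" "$home/.pulse-cookie" "$home/.config/pulse"; do
    if [ -e "$d" ]; then
        mv "$d" "$backup"/
    fi
done
```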
<br />
=== Simultaneous output to multiple sound cards / devices ===<br />
<br />
Simultaneous output to two different devices can be very useful. For example, being able to send audio to your A/V receiver via your graphics card's HDMI output, while also sending the same audio through the analogue output of your motherboard's built-in audio. This is much less hassle than it used to be (in this example, we are using GNOME desktop).<br />
<br />
Using {{Pkg|paprefs}}, simply select "Add virtual output device for simultaneous output on all local sound cards" from under the "Simultaneous Output" tab. Then, under GNOME's "sound settings", select the simultaneous output you have just created.<br />
<br />
If this doesn't work, try adding the following to {{ic|~/.asoundrc}}:<br />
<br />
pcm.dsp {<br />
type plug<br />
slave.pcm "dmix"<br />
}<br />
<br />
{{Tip|Simultaneous output can also be achieved manually using alsamixer. Disable "auto mute" item, then unmute other output sources you want to hear and increase their volume.}}<br />
<br />
=== Simultaneous output to multiple sinks on the same sound card not working ===<br />
<br />
This can be useful for users who have multiple sound sources and want to play them on different sinks/outputs. <br />
An example use-case for this would be if you play music and also voice chat and want to output music to speakers (in this case Digital S/PDIF) and voice to headphones. (Analog)<br />
<br />
This is sometimes auto-detected by PulseAudio, but not always. If you know that your sound card can output to both analog and S/PDIF at the same time, and PulseAudio does not offer such a profile in ''pavucontrol'' or ''veromix'', then you probably need to create a configuration file for your sound card.<br />
<br />
In more detail, you need to create a profile set for your specific sound card.<br />
This is done in two steps:<br />
* Create a udev rule that makes PulseAudio choose the configuration file specific to the sound card.<br />
* Create the actual configuration.<br />
<br />
First, create a PulseAudio udev rule.<br />
<br />
{{Note|This is only an example for Asus Xonar Essence STX.<br />
Read [[udev]] to find out the correct values.}}<br />
<br />
{{Note|Your rules file must have a lower number than the original PulseAudio rule to take effect.}}<br />
<br />
{{hc|/usr/lib/udev/rules.d/90-pulseaudio-Xonar-STX.rules|<br />
ACTION&#61;&#61;"change", SUBSYSTEM&#61;&#61;"sound", KERNEL&#61;&#61;"card*", \<br />
ATTRS&#123;subsystem_vendor&#125;&#61;&#61;"0x1043", ATTRS&#123;subsystem_device&#125;&#61;&#61;"0x835c", ENV&#123;PULSE_PROFILE_SET&#125;&#61;"asus-xonar-essence-stx.conf" <br />
}}<br />
<br />
Now create the configuration file. You can write one from scratch, or you can copy the default configuration file, rename it, and add the profile that you know works. Less pretty, but faster.<br />
<br />
To enable multiple sinks on the Asus Xonar Essence STX, you only need to add the following:<br />
<br />
{{Note|{{ic|asus-xonar-essence-stx.conf}} also includes all code/mappings from {{ic|default.conf}}.}}<br />
<br />
{{hc|/usr/share/pulseaudio/alsa-mixer/profile-sets/asus-xonar-essence-stx.conf|<br />
[Profile analog-stereo+iec958-stereo]<br />
description &#61; Analog Stereo Duplex + Digital Stereo Output<br />
input-mappings &#61; analog-stereo<br />
output-mappings &#61; analog-stereo iec958-stereo<br />
skip-probe &#61; yes<br />
}}<br />
<br />
This will auto-profile your Asus Xonar Essence STX with default profiles and add your own profile so you can have multiple sinks.<br />
<br />
You need to create another profile in the configuration file if you want to have the same functionality with AC3 Digital 5.1 output.<br />
<br />
See the [http://www.freedesktop.org/wiki/Software/PulseAudio/Backends/ALSA/Profiles/ PulseAudio article about profiles].<br />
<br />
=== Some profiles like SPDIF are not enabled by default on the card ===<br />
<br />
Some profiles like IEC-958 (i.e. S/PDIF) may not be enabled by default on the selected sink: each time the system starts up, the card profile is disabled and the PulseAudio daemon cannot select it.<br />
You have to add the profile selection to your default.pa file.<br />
Verify the card and profile names with:<br />
<br />
$ pacmd list-cards<br />
Then edit the configuration to add the profile:<br />
{{hc|~/.config/pulse/default.pa|<br />
## Replace with your card name and the profile you want to activate<br />
set-card-profile alsa_card.pci-0000_00_1b.0 output:iec958-stereo+input:analog-stereo<br />
}}<br />
<br />
PulseAudio will add this profile to the pool of available profiles.<br />
<br />
== Bluetooth ==<br />
<br />
=== Disable Bluetooth support ===<br />
<br />
If you do not use Bluetooth, you may experience the following error in your journal:<br />
<br />
bluez5-util.c: GetManagedObjects() failed: org.freedesktop.DBus.Error.ServiceUnknown: The name org.bluez was not provided by any .service files<br />
<br />
To disable Bluetooth support in PulseAudio, make sure that the following lines are commented out in the configuration file in use ({{ic|~/.config/pulse/default.pa}} or {{ic|/etc/pulse/default.pa}}):<br />
<br />
{{hc|~/.config/pulse/default.pa|<br />
### Automatically load driver modules for Bluetooth hardware<br />
#.ifexists module-bluetooth-policy.so<br />
#load-module module-bluetooth-policy<br />
#.endif<br />
<br />
#.ifexists module-bluetooth-discover.so<br />
#load-module module-bluetooth-discover<br />
#.endif<br />
}}<br />
<br />
=== Bluetooth headset replay problems ===<br />
<br />
Some users [https://bbs.archlinux.org/viewtopic.php?id=117420 report] huge delays or even no sound when the Bluetooth connection does not send any data. This is due to the {{ic|module-suspend-on-idle}} module, which automatically suspends sinks/sources on idle. As this can cause problems with headsets, the responsible module can be deactivated.<br />
<br />
To disable loading of the {{ic|module-suspend-on-idle}} module, comment out the following line in the configuration file in use ({{ic|~/.config/pulse/default.pa}} or {{ic|/etc/pulse/default.pa}}):<br />
<br />
{{hc|~/.config/pulse/default.pa|<br />
### Automatically suspend sinks/sources that become idle for too long<br />
#load-module module-suspend-on-idle<br />
}}<br />
<br />
Finally restart PulseAudio to apply the changes.<br />
<br />
=== Automatically switch to Bluetooth or USB headset ===<br />
<br />
Add the following:<br />
{{hc|/etc/pulse/default.pa|<br />
# automatically switch to newly-connected devices<br />
load-module module-switch-on-connect<br />
}}<br />
<br />
=== My Bluetooth device is paired but does not play any sound ===<br />
<br />
[[Bluetooth#My_device_is_paired_but_no_sound_is_played_from_it|See the article in Bluetooth section]]<br />
<br />
Starting from PulseAudio 2.99 and bluez 4.101 you should '''avoid''' using the Socket interface. Do '''not''' use:<br />
<br />
{{hc|/etc/bluetooth/audio.conf|<nowiki><br />
[General]<br />
Enable=Socket<br />
</nowiki>}}<br />
<br />
If you face problems with A2DP and PulseAudio 2.99, make sure you have the {{Pkg|sbc}} library installed.<br />
<br />
== Applications ==<br />
<br />
=== Flash content ===<br />
<br />
Since Adobe Flash does not directly support PulseAudio, the recommended way is to [[PulseAudio#ALSA|configure ALSA to use the virtual PulseAudio sound card]].<br />
<br />
If Flash audio is lagging, you may try to have Flash access ALSA directly. See [[PulseAudio#ALSA/dmix without grabbing hardware device]] for details.<br />
<br />
=== Permission errors bug ===<br />
<br />
{{hc|pulseaudio --start|<br />
E: [autospawn] core-util.c: Failed to create secure directory (/run/user/1000/pulse): Operation not permitted<br />
W: [autospawn] lock-autospawn.c: Cannot access autospawn lock.<br />
E: [pulseaudio] main.c: Failed to acquire autospawn lock}}<br />
<br />
Known programs that change permissions of {{ic|/run/user/''user id''/pulse}} when using [[Polkit]] for root elevation:<br />
<br />
*{{AUR|sakis3g}} <br />
<br />
As a workaround, include {{Pkg|gksu}} or {{Pkg|kdesu}} in a [[desktop entry]], or add {{ic|1=''username'' ALL=NOPASSWD: /usr/bin/''program_name''}} to [[sudoers]] to run it with {{Pkg|sudo}} or {{ic|gksudo}} without a password.<br />
<br />
Another workaround is to uncomment and set {{ic|1=daemonize = yes}} in {{ic|/etc/pulse/daemon.conf}}.<br />
<br />
See also [https://bbs.archlinux.org/viewtopic.php?id=135955].<br />
<br />
=== Audacity ===<br />
<br />
When starting Audacity you may find that your headphones no longer work. This can be because Audacity is trying to use them as a recording device. To fix this, open Audacity, then set its recording device to {{ic|1=pulse:Internal Mic:0}}.<br />
<br />
Under some circumstances, playback may be distorted, very fast, or freeze, as discussed in the [http://wiki.audacityteam.org/wiki/Linux_Issues#ALSA_and_other_sound_systems Audacity Wiki's Linux Issues page].<br />
<br />
The solution proposed there may work: start Audacity with:<br />
<br />
$ env PULSE_LATENCY_MSEC=30 audacity<br />
<br />
If the solution above does not fix this issue, one may wish to temporarily disable pulseaudio while running Audacity by using the {{ic|pasuspender}} command:<br />
<br />
$ pasuspender -- audacity<br />
<br />
Then, be sure to select the appropriate ALSA input and output devices in Audacity.<br />
<br />
See also [[#Setting the default fragment number and buffer size in PulseAudio]].<br />
<br />
== Other Issues ==<br />
<br />
=== Bad configuration files ===<br />
<br />
After starting PulseAudio, if the system outputs no sound, it may be necessary to delete the contents of {{ic|~/.config/pulse}} and/or {{ic|~/.pulse}}. PulseAudio will automatically create new configuration files on its next start.<br />
<br />
=== Can't update configuration of sound device in pavucontrol ===<br />
<br />
{{Pkg|pavucontrol}} is a handy GUI utility for configuring PulseAudio. Under its 'Configuration' tab, you can select different profiles for each of your sound devices e.g. analogue stereo, digital output (IEC958), HDMI 5.1 Surround etc.<br />
<br />
However, you may run into an instance where selecting a different profile for a card results in the pulse daemon crashing and auto restarting without the new selection "sticking". If this occurs, use the other useful GUI tool, {{Pkg|paprefs}}, to check under the "Simultaneous Output" tab for a virtual simultaneous device. If this setting is active (checked), it will prevent you changing any card's profile in pavucontrol. Uncheck this setting, then adjust your profile in pavucontrol prior to re-enabling simultaneous output in paprefs.<br />
<br />
=== Pulse overwrites ALSA settings ===<br />
<br />
PulseAudio usually overwrites the ALSA settings — for example set with alsamixer — at start-up, even when the ALSA daemon is loaded. Since there seems to be no other way to restrict this behaviour, a workaround is to restore the ALSA settings again after PulseAudio has started. Add the following command to {{ic|.xinitrc}} or {{ic|.bash_profile}} or any other [[autostart]] file:<br />
<br />
restore_alsa() {<br />
while [ -z "$(pidof pulseaudio)" ]; do<br />
sleep 0.5<br />
done<br />
alsactl -f /var/lib/alsa/asound.state restore <br />
}<br />
restore_alsa &<br />
<br />
=== Prevent Pulse from restarting after being killed ===<br />
<br />
Sometimes you may wish to temporarily disable Pulse. In order to do so you will have to prevent Pulse from restarting after being killed.<br />
<br />
{{hc|~/.config/pulse/client.conf|2=<br />
# Disable autospawning the PulseAudio daemon<br />
autospawn = no<br />
}}<br />
<br />
=== Daemon startup failed ===<br />
<br />
Try resetting PulseAudio:<br />
<br />
$ rm -rf /tmp/pulse* ~/.pulse* ~/.config/pulse<br />
$ pulseaudio -k<br />
$ pulseaudio --start<br />
<br />
* Check that options for sinks are set up correctly.<br />
<br />
* If you configured default.pa to load and use the OSS modules, check with {{Pkg|lsof}} that the {{ic|/dev/dsp}} device is not used by another application.<br />
<br />
* LXDE may have a problem closing all applications after the user logs out; to fix it, see [[LXDM#Incorrect logout handling|Incorrect logout handling]].<br />
<br />
* Set a preferred working resample method. Use {{ic|pulseaudio --dump-resample-methods}} to list all available resample methods.<br />
<br />
* To get details about current errors or the status of the daemon, use commands like {{ic|pax11publish -d}} and {{ic|pulseaudio -v}}, where the {{ic|-v}} option can be given multiple times to increase log verbosity, equivalent to the {{ic|1=--log-level[=LEVEL]}} option with LEVEL from 0 to 4. See [[#Outputs by PulseAudio error status check utilities]].<br />
<br />
See also man pages for [http://linux.die.net/man/1/pax11publish pax11publish] and [http://linux.die.net/man/1/pulseaudio pulseaudio] for more details.<br />
<br />
==== Outputs by PulseAudio error status check utilities ====<br />
<br />
If {{ic|pax11publish -d}} shows an error like:<br />
<br />
N: [pulseaudio] main.c: User-configured server at "user", refusing to start/autospawn.<br />
<br />
then run {{ic|pax11publish -r}}, then log out and back in. This manual cleanup is always required when using LXDM because it does not restart the X server on logout; see [[LXDM#PulseAudio]].<br />
<br />
If the {{ic|pulseaudio -vvvv}} command shows an error like:<br />
<br />
E: [pulseaudio] module-udev-detect.c: You apparently ran out of inotify watches, probably because Tracker/Beagle took them all away. I wished people would do their homework first and fix inotify before using it for watching whole directory trees which is something the current inotify is certainly not useful for. Please make sure to drop the Tracker/Beagle guys a line complaining about their broken use of inotify.<br />
<br />
This can be resolved temporarily by:<br />
# echo 100000 > /proc/sys/fs/inotify/max_user_watches<br />
<br />
To make the setting permanent, save it in the ''99-sysctl.conf'' file:<br />
<br />
{{hc|/etc/sysctl.d/99-sysctl.conf|2=<br />
# Increase inotify max watches per user<br />
fs.inotify.max_user_watches = 100000}}<br />
<br />
{{Warning|Raising this limit may significantly increase kernel memory consumption.}}<br />
<br />
'''See also''' <br />
<br />
* [http://www.linuxinsight.com/proc_sys_fs_inotify.html proc_sys_fs_inotify] and [http://lwn.net/Articles/604686/ dnotify, inotify] - more details about ''inotify/max_user_watches''<br />
* [http://stackoverflow.com/questions/535768/what-is-a-reasonable-amount-of-inotify-watches-with-linux?answertab=votes#tab-top reasonable amount of inotify watches with Linux]<br />
* [http://linux.die.net/man/7/inotify inotify] - man page<br />
<br />
=== Daemon already running ===<br />
<br />
On some systems, PulseAudio may be started multiple times. journalctl will report:<br />
<br />
[pulseaudio] pid.c: Daemon already running.<br />
<br />
Make sure to use only one method of autostarting applications. {{Pkg|pulseaudio}} includes these files:<br />
<br />
* {{ic|/etc/X11/xinit/xinitrc.d/pulseaudio}}<br />
* {{ic|/etc/xdg/autostart/pulseaudio.desktop}}<br />
* {{ic|/etc/xdg/autostart/pulseaudio-kde.desktop}}<br />
<br />
Also check user autostart files and directories, such as [[xinitrc]], {{ic|~/.config/autostart/}} etc.<br />
<br />
=== Subwoofer stops working after end of every song ===<br />
<br />
Known issue: https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/494099<br />
<br />
To fix this, edit {{ic|/etc/pulse/daemon.conf}} and enable {{ic|enable-lfe-remixing}}:<br />
{{hc|/etc/pulse/daemon.conf|<nowiki><br />
enable-lfe-remixing = yes<br />
</nowiki>}}<br />
<br />
=== Unable to select surround configuration other than "Surround 4.0" ===<br />
<br />
If you're unable to set 5.1 surround output in pavucontrol because it only shows "Analog Surround 4.0 Output", open the ALSA mixer and change the output configuration there to 6 channels. Then restart pulseaudio, and pavucontrol will list many more options.<br />
<br />
=== Realtime scheduling ===<br />
<br />
If rtkit does not work, you can manually set up your system to run PulseAudio with real-time scheduling, which can help performance. To do this, add the following lines to {{ic|/etc/security/limits.conf}}:<br />
<br />
@pulse-rt - rtprio 9<br />
@pulse-rt - nice -11<br />
<br />
Afterwards, you need to add your user to the {{ic|pulse-rt}} group:<br />
<br />
# gpasswd -a <user> pulse-rt<br />
<br />
=== pactl "invalid option" error with negative percentage arguments ===<br />
<br />
{{ic|pactl}} commands that take negative percentage arguments will fail with an "invalid option" error. Use the standard shell {{ic|--}} pseudo-argument to disable argument parsing before the negative argument, ''e.g.'' {{ic|pactl set-sink-volume 1 -- -5%}}.<br />
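The {{ic|--}} convention is ordinary end-of-options behavior shared by most commands that use getopt-style parsing, not something specific to pactl. For example, grep would otherwise try to parse {{ic|-5%}} as an option:

```shell
# Without '--', grep would treat -5% as an (invalid) option; with it,
# everything after '--' is taken as an operand (here, the pattern).
printf -- '-5%%\n+5%%\n' | grep -- -5%
```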
<br />
=== Fallback device is not respected ===<br />
<br />
PulseAudio does not have a true default device. Instead it uses a [http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/DefaultDevice/ "fallback"], which only applies to new sound streams. This means previously run applications are not affected by the newly set fallback device.<br />
<br />
{{Pkg|gnome-control-center}}, {{Pkg|mate-media-pulseaudio}}{{Broken package link|replaced by {{Pkg|mate-media}}}} and {{AUR|paswitch}} handle this gracefully. Alternatively: <br />
<br />
1. Move the old streams in {{Pkg|pavucontrol}} manually to the new sound card.<br />
<br />
2. Stop Pulse, erase the "stream-volumes" in {{ic|~/.config/pulse}} and/or {{ic|~/.pulse}} and restart Pulse. This also resets application volumes.<br />
<br />
3. Disable stream device reading. This may not be wanted when using different sound cards with different applications.<br />
<br />
{{hc|/etc/pulse/default.pa|<nowiki><br />
load-module module-stream-restore restore_device=false<br />
</nowiki>}}</div>
<hr />
<div>[[Category:Sound]]<br />
[[it:PulseAudio/Troubleshooting]]<br />
[[ja:PulseAudio/トラブルシューティング]]<br />
[[ru:PulseAudio/Troubleshooting]]<br />
See [[PulseAudio]] for the main article.<br />
<br />
== Volume ==<br />
<br />
Here you will find some hints on volume issues and why you may not hear anything.<br />
<br />
=== Auto-Mute Mode ===<br />
<br />
Auto-Mute Mode may be enabled. It can be disabled using {{ic|alsamixer}}.<br />
<br />
See http://superuser.com/questions/431079/how-to-disable-auto-mute-mode for more.<br />
<br />
To save your current settings as the default options, run {{ic|alsactl store}} as root.<br />
<br />
=== Muted audio device ===<br />
<br />
If one experiences no audio output via any means while using [[ALSA]], attempt to unmute the sound card. To do this, launch {{ic|alsamixer}} and make sure each column has a green {{ic|00}} under it (this can be toggled by pressing {{ic|m}}):<br />
<br />
$ alsamixer -c 0<br />
<br />
{{Note|alsamixer will not tell you which output device is set as the default. One possible cause of no sound after install is that PulseAudio detects the wrong output device as a default. Install {{Pkg|pavucontrol}} and check if there is any output on the pavucontrol panel when playing a ''.wav'' file.}}<br />
<br />
=== Muted application ===<br />
<br />
If a specific application is muted or low while all else seems to be in order, it may be due to individual {{ic|sink-input}} settings. With the offending application playing audio, run:<br />
<br />
$ pacmd list-sink-inputs<br />
<br />
Find and make note of the {{ic|index}} of the corresponding {{ic|sink input}}. The {{ic|properties:}} {{ic|application.name}} and {{ic|application.process.binary}}, among others, should help here. Ensure sane settings are present, specifically those of {{ic|muted}} and {{ic|volume}}.<br />
If the sink is muted, it can be unmuted by:<br />
<br />
$ pacmd set-sink-input-mute <index> false<br />
<br />
If the volume needs adjusting, it can be set to 100% by:<br />
<br />
$ pacmd set-sink-input-volume <index> 0x10000<br />
<br />
{{Note|If {{ic|pacmd}} reports {{ic|0 sink input(s)}}, double-check that the application is playing audio. If it is still absent, verify that other applications show up as sink inputs.}}<br />
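<br />
The {{ic|0x10000}} value used above is PulseAudio's raw representation of 100% volume (65536). As a rough sketch (the sink-input index {{ic|42}} and the 150% target are placeholders; find the real index with {{ic|pacmd list-sink-inputs}}), any percentage can be converted to a raw value with shell arithmetic:<br />

```shell
# Convert a volume percentage to PulseAudio's raw scale,
# where 0x10000 (65536) represents 100%.
percent=150                        # placeholder target volume
raw=$((0x10000 * percent / 100))   # 98304 for 150%

# Placeholder index 42; find the real one with: pacmd list-sink-inputs
echo "pacmd set-sink-input-volume 42 $raw"
```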
<br />
=== Volume adjustment does not work properly ===<br />
<br />
Check the ALSA mixer path configuration in {{ic|/usr/share/pulseaudio/alsa-mixer/paths/analog-output.conf.common}}.<br />
<br />
If the volume does not appear to increment/decrement properly using {{ic|alsamixer}} or {{ic|amixer}}, it may be because PulseAudio uses a larger number of increments (65537 to be exact). Try using larger values when changing the volume, e.g. {{ic|amixer set Master 655+}} (655 steps is roughly 1% of the 65536-step range).<br />
<br />
=== Per-application volumes change when the Master volume is adjusted ===<br />
<br />
This is because PulseAudio uses flat volumes by default, instead of volumes relative to an absolute master volume. If this behavior is inconvenient or otherwise undesirable, relative volumes can be enabled by disabling flat volumes in the PulseAudio daemon's configuration file:<br />
<br />
{{hc|/etc/pulse/daemon.conf or ~/.config/pulse/daemon.conf|<nowiki><br />
flat-volumes = no<br />
</nowiki>}}<br />
<br />
and then restarting PulseAudio by executing<br />
<br />
$ pulseaudio -k<br />
$ pulseaudio --start<br />
<br />
=== Volume gets louder every time a new application is started ===<br />
<br />
By default, changing the volume in an application sets the global system volume to that level instead of only affecting the respective application. Applications setting their volume on startup will therefore cause the system volume to "jump".<br />
<br />
Fix this by disabling flat volumes, as demonstrated in the previous section. When Pulse comes back after a few seconds, applications will not alter the global system volume anymore but have their own volume level again.<br />
<br />
{{Note|A previously installed and removed pulseaudio-equalizer may leave behind remnants of the setup in {{ic|~/.config/pulse/default.pa}} or {{ic|~/.pulse/default.pa}} which can also cause maximized volume trouble. Comment that out as needed.}}<br />
<br />
=== Sound output is only mono on M-Audio Audiophile 2496 sound card ===<br />
<br />
Add the following:<br />
<br />
{{hc|/etc/pulse/default.pa|<nowiki><br />
load-module module-alsa-sink sink_name=delta_out device=hw:M2496 format=s24le channels=10 channel_map=left,right,aux0,aux1,aux2,aux3,aux4,aux5,aux6,aux7<br />
load-module module-alsa-source source_name=delta_in device=hw:M2496 format=s24le channels=12 channel_map=left,right,aux0,aux1,aux2,aux3,aux4,aux5,aux6,aux7,aux8,aux9<br />
set-default-sink delta_out<br />
set-default-source delta_in<br />
</nowiki>}}<br />
<br />
=== No sound below a volume cutoff ===<br />
<br />
Known issue (won't fix): https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/223133<br />
<br />
If sound does not play when PulseAudio's volume is set below a certain level, try setting {{ic|1=ignore_dB=1}} in {{ic|/etc/pulse/default.pa}}:<br />
{{hc|/etc/pulse/default.pa|<nowiki><br />
load-module module-udev-detect ignore_dB=1<br />
</nowiki>}}<br />
<br />
However, be aware that it may cause another bug, preventing PulseAudio from unmuting the speakers when headphones or other audio devices are unplugged.<br />
<br />
=== Low volume for internal microphone ===<br />
<br />
If you experience low volume on the internal notebook microphone, try setting:<br />
{{hc|/etc/pulse/default.pa|<nowiki><br />
set-source-volume 1 300000<br />
</nowiki>}}<br />
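<br />
On PulseAudio's raw volume scale, {{ic|0x10000}} (65536) is 100%, so the {{ic|300000}} above corresponds to about 457%, i.e. strong software amplification. A small sketch of the conversion (the source index {{ic|1}} in the snippet above is an assumption; use whatever {{ic|pacmd list-sources}} reports for the internal microphone):<br />

```shell
# PulseAudio raw volume: 0x10000 (65536) == 100%.
raw=300000
percent=$((raw * 100 / 65536))   # 457
echo "set-source-volume 1 $raw is about ${percent}%"
```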
<br />
=== Clients alter master output volume (a.k.a. volume jumps to 100% after running application) ===<br />
<br />
If changing the volume in specific applications, or simply running an application, changes the master output volume, this is likely due to PulseAudio's flat-volumes mode. Before disabling it, KDE users should try lowering their system notifications volume in ''System Settings -> Application and System Notifications -> Manage Notifications'' under the ''Player Settings'' tab to something reasonable. Changing the ''Event Sounds'' volume in KMix or another volume mixer application will not help here. This should make the flat-volumes mode work as intended. If it does not, some other application is likely requesting 100% volume when it plays something. If all else fails, you can disable flat volumes:<br />
<br />
{{hc|/etc/pulse/daemon.conf|<nowiki><br />
flat-volumes = no<br />
</nowiki>}}<br />
<br />
Then restart PulseAudio daemon:<br />
<br />
# pulseaudio -k<br />
# pulseaudio --start<br />
<br />
=== No sound after resume from suspend ===<br />
<br />
If audio generally works, but stops after resume from suspend, try "reloading" PulseAudio by executing:<br />
$ /usr/bin/pasuspender /bin/true<br />
<br />
This is better than completely killing and restarting it ({{ic|pulseaudio -k}} followed by {{ic|pulseaudio --start}}), because it doesn't break already running applications.<br />
<br />
If the above fixes your problem, you may wish to automate it, by creating a systemd service file.<br />
<br />
1. Create the template service file in {{ic|/etc/systemd/system/resume-fix-pulseaudio@.service}}:<br />
<br />
[Unit]<br />
Description=Fix PulseAudio after resume from suspend<br />
After=suspend.target<br />
<br />
[Service]<br />
User=%I<br />
Type=oneshot<br />
Environment="XDG_RUNTIME_DIR=/run/user/%U"<br />
ExecStart=/usr/bin/pasuspender /bin/true<br />
<br />
[Install]<br />
WantedBy=suspend.target<br />
<br />
2. Enable it for your user account<br />
<br />
# systemctl enable resume-fix-pulseaudio@YOUR_USERNAME_HERE.service<br />
<br />
3. Reload systemd<br />
<br />
# systemctl --system daemon-reload<br />
<br />
=== ALSA channels mute when headphones are plugged/unplugged improperly ===<br />
<br />
If audio remains muted on the wrong ALSA channel (set to 0% in alsamixer) after you plug in or unplug headphones, you may be able to fix it by opening {{ic|/etc/pulse/default.pa}} and commenting out the line:<br />
<br />
load-module module-switch-on-port-available<br />
<br />
== Microphone ==<br />
<br />
=== Microphone not detected by PulseAudio ===<br />
<br />
Determine the card and device number of your mic:<br />
<br />
$ arecord -l<br />
**** List of CAPTURE Hardware Devices ****<br />
card 0: PCH [HDA Intel PCH], device 0: ALC269VC Analog [ALC269VC Analog]<br />
Subdevices: 1/1<br />
Subdevice #0: subdevice #0<br />
<br />
In hw:CARD,DEVICE notation, you would specify the above device as {{ic|hw:0,0}}.<br />
<br />
Then, edit {{ic|/etc/pulse/default.pa}} and insert a {{ic|load-module}} line specifying your device as follows:<br />
<br />
load-module module-alsa-source device=hw:0,0<br />
# the line above should be somewhere before the line below<br />
.ifexists module-udev-detect.so<br />
<br />
Finally, restart pulseaudio to apply the new settings:<br />
<br />
$ pulseaudio -k ; pulseaudio -D<br />
<br />
If everything worked correctly, you should now see your mic show up when running {{ic|pavucontrol}} (under the {{ic|Input Devices}} tab).<br />
<br />
=== PulseAudio uses wrong microphone ===<br />
<br />
If PulseAudio uses the wrong microphone, and changing the Input Device with Pavucontrol did not help, take a look at alsamixer. It seems that Pavucontrol does not always set the input source correctly.<br />
<br />
$ alsamixer<br />
<br />
Press {{ic|F6}} and choose your sound card, e.g. HDA Intel. Now press {{ic|F5}} to display all items. Try to find the item: {{ic|Input Source}}. With the up/down arrow keys you are able to change the input source.<br />
<br />
Now try if the correct microphone is used for recording.<br />
<br />
=== No microphone on ThinkPad T400/T500/T420 ===<br />
<br />
Run:<br />
<br />
alsamixer -c 0<br />
<br />
Unmute and maximize the volume of the "Internal Mic".<br />
<br />
Once you see the device with:<br />
<br />
arecord -l<br />
<br />
you might still need to adjust the settings. The microphone and the audio jack are duplexed. Set the configuration of the internal audio in pavucontrol to ''Analog Stereo Duplex''.<br />
<br />
=== No microphone input on Acer Aspire One ===<br />
<br />
Install pavucontrol, unlink the microphone channels and turn down the left one to 0.<br />
Reference: http://getsatisfaction.com/jolicloud/topics/deaf_internal_mic_on_acer_aspire_one#reply_2108048<br />
<br />
=== Static noise in microphone recording ===<br />
<br />
If recordings made with Skype, gnome-sound-recorder, arecord, etc. contain static noise, the sound card's sampling rate is most likely set incorrectly. To fix this, set the proper sampling rate for the sound hardware in {{ic|/etc/pulse/daemon.conf}}.<br />
<br />
==== Determine sound cards in the system (1/5) ====<br />
<br />
This requires {{Pkg|alsa-utils}} and related packages to be installed:<br />
{{hc|$ arecord --list-devices|<br />
**** List of CAPTURE Hardware Devices ****<br />
card 0: Intel [HDA Intel], device 0: ALC888 Analog [ALC888 Analog]<br />
Subdevices: 1/1<br />
Subdevice #0: subdevice #0<br />
card 0: Intel [HDA Intel], device 2: ALC888 Analog [ALC888 Analog]<br />
Subdevices: 1/1<br />
Subdevice #0: subdevice #0<br />
}}<br />
<br />
Sound card is {{ic|hw:0,0}}.<br />
<br />
==== Determine sampling rate of the sound card (2/5) ====<br />
<br />
{{hc|1=$ arecord -f dat -r 60000 -D hw:0,0 -d 5 test.wav|2=<br />
Recording WAVE 'test.wav' : Signed 16 bit Little Endian, Rate 60000 Hz, Stereo<br />
Warning: rate is not accurate (requested = 60000Hz, '''got = 96000Hz''')<br />
please, try the plug plugin<br />
}}<br />
<br />
Observe the {{ic|1=got = 96000Hz}}: this is the maximum sampling rate of the card.<br />
<br />
==== Setting the sound card's sampling rate into PulseAudio configuration (3/5) ====<br />
<br />
The default sampling rate in PulseAudio:<br />
{{hc|1=$ grep "default-sample-rate" /etc/pulse/daemon.conf|2=<br />
; default-sample-rate = 44100<br />
}}<br />
<br />
{{ic|44100}} is disabled and needs to be changed to {{ic|96000}}:<br />
# sed 's/; default-sample-rate = 44100/default-sample-rate = 96000/g' -i /etc/pulse/daemon.conf<br />
<br />
==== Restart PulseAudio to apply the new settings (4/5) ====<br />
<br />
$ pulseaudio -k<br />
$ pulseaudio --start<br />
<br />
==== Finally check by recording and playing it back (5/5) ====<br />
<br />
Let us record some voice using the microphone for, say, 10 seconds. Make sure the microphone is not muted and its volume is turned up.<br />
<br />
$ arecord -f cd -d 10 test-mic.wav<br />
<br />
After 10 seconds, let us play the recording...<br />
<br />
$ aplay test-mic.wav<br />
<br />
Now hopefully, there is no static noise in microphone recording anymore.<br />
<br />
=== No microphone on Steam or Skype with enable-remixing = no ===<br />
<br />
When you set {{ic|1=enable-remixing = no}} in {{ic|/etc/pulse/daemon.conf}}, you may find that your microphone stops working in certain applications like Skype or Steam. This happens because these applications capture the microphone as mono only, and with remixing disabled, PulseAudio will no longer remix your stereo microphone to mono.<br />
<br />
To fix this, you need to tell PulseAudio to do the remixing for you:<br />
<br />
1. Find the name of the source <br />
<br />
# pacmd list-sources<br />
<br />
Example output edited for brevity, the name you need is in bold:<br />
<br />
index: 2<br />
name: <'''alsa_input.pci-0000_00_14.2.analog-stereo'''><br />
driver: <module-alsa-card.c><br />
flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY DYNAMIC_LATENCY<br />
<br />
2. Add a remap rule to {{ic|/etc/pulse/default.pa}}, use the name you found with the previous command, here we will use '''alsa_input.pci-0000_00_14.2.analog-stereo''' as an example:<br />
<br />
{{hc|/etc/pulse/default.pa|<nowiki><br />
### Remap microphone to mono<br />
load-module module-remap-source master=alsa_input.pci-0000_00_14.2.analog-stereo master_channel_map=front-left,front-right channels=2 channel_map=mono,mono<br />
</nowiki>}}<br />
<br />
3. Restart Pulseaudio<br />
<br />
# pulseaudio -k<br />
<br />
{{Note|PulseAudio may fail to start if you do not exit a program that was using the microphone (e.g. if you tested in Steam before modifying the file), in which case you should exit the application and manually start PulseAudio:}}<br />
<br />
# pulseaudio --start<br />
<br />
== Audio quality ==<br />
<br />
=== Enable Echo/Noise-Cancellation ===<br />
<br />
Arch does not load the PulseAudio echo-cancellation module by default, so it has to be added in {{ic|/etc/pulse/default.pa}}. First test whether the module is already loaded by running {{ic|pacmd}} and entering {{ic|list-modules}}. If you cannot find a line showing {{ic|name: <module-echo-cancel>}}, add <br />
<br />
{{hc|/etc/pulse/default.pa|<br />
### Enable Echo/Noise-Cancelation<br />
load-module module-echo-cancel<br />
}}<br />
<br />
then restart PulseAudio<br />
<br />
pulseaudio -k<br />
pulseaudio --start<br />
<br />
and check whether the module is active by starting {{ic|pavucontrol}}: under the ''Recording'' tab, the input device should be shown as an {{ic|Echo-Cancel Source Stream}}.<br />
<br />
=== Glitches, skips or crackling ===<br />
<br />
The newer implementation of the PulseAudio sound server uses timer-based audio scheduling instead of the traditional, interrupt-driven approach. <br />
<br />
Timer-based scheduling may expose issues in some ALSA drivers. On the other hand, other drivers might be glitchy without it on, so check to see what works on your system. <br />
<br />
To turn timer-based scheduling off add {{ic|1=tsched=0}} in {{ic|/etc/pulse/default.pa}}:<br />
{{hc|/etc/pulse/default.pa|2=<br />
load-module module-udev-detect tsched=0<br />
}}<br />
<br />
Then restart the PulseAudio server:<br />
<br />
$ pulseaudio -k<br />
$ pulseaudio --start<br />
<br />
Do the reverse to enable timer-based scheduling, if not already enabled by default.<br />
<br />
If you are using Intel's [[Wikipedia:IOMMU|IOMMU]] and experience glitches and/or skips, add {{ic|1=intel_iommu=igfx_off}} to your kernel command line.<br />
<br />
Some Intel audio cards using the {{ic|snd-hda-intel}} module need the options {{ic|1=vid=8086 pid=8ca0 snoop=0}}. To set them permanently, create/modify the following file with the line below.<br />
{{hc|/etc/modprobe.d/sound.conf|2=<br />
options snd-hda-intel vid=8086 pid=8ca0 snoop=0<br />
}}<br />
<br />
Please report any such cards to the [http://www.freedesktop.org/wiki/Software/PulseAudio/Backends/ALSA/BrokenDrivers/ PulseAudio Broken Sound Driver page].<br />
<br />
=== Setting the default fragment number and buffer size in PulseAudio ===<br />
<br />
{{Poor writing|Copied from Linux mint topic with few additions}}<br />
<br />
==== Finding out your audio device parameters (1/4) ====<br />
<br />
To find out what your sound card buffering settings are:<br />
{{bc|<nowiki><br />
$ echo autospawn = no >> ~/.config/pulse/client.conf<br />
$ pulseaudio -k<br />
$ LANG=C timeout --foreground -k 10 -s kill 10 pulseaudio -vvvv 2>&1 | grep device.buffering -B 10<br />
$ sed -i '$d' ~/.config/pulse/client.conf<br />
</nowiki>}}<br />
<br />
{{Note|PulseAudio may continue to autospawn even with the setting above in place, because PulseAudio version 7 uses socket activation through systemd. You can prevent that with {{ic|systemctl --user mask pulseaudio.socket}}. For more information, see [[PulseAudio#Running]].}}<br />
<br />
For each sound card detected by PulseAudio, you will see an output similar to:<br />
{{bc|<nowiki><br />
I: [pulseaudio] source.c: alsa.long_card_name = "HDA Intel at 0xfa200000 irq 46"<br />
I: [pulseaudio] source.c: alsa.driver_name = "snd_hda_intel"<br />
I: [pulseaudio] source.c: device.bus_path = "pci-0000:00:1b.0"<br />
I: [pulseaudio] source.c: sysfs.path = "/devices/pci0000:00/0000:00:1b.0/sound/card0"<br />
I: [pulseaudio] source.c: device.bus = "pci"<br />
I: [pulseaudio] source.c: device.vendor.id = "8086"<br />
I: [pulseaudio] source.c: device.vendor.name = "Intel Corporation"<br />
I: [pulseaudio] source.c: device.product.name = "82801I (ICH9 Family) HD Audio Controller"<br />
I: [pulseaudio] source.c: device.form_factor = "internal"<br />
I: [pulseaudio] source.c: device.string = "front:0"<br />
I: [pulseaudio] source.c: device.buffering.buffer_size = "768000"<br />
I: [pulseaudio] source.c: device.buffering.fragment_size = "384000"<br />
</nowiki>}}<br />
Take note of the {{ic|buffer_size}} and {{ic|fragment_size}} values for the relevant sound card.<br />
<br />
==== Calculate your fragment size in msecs and number of fragments (2/4) ====<br />
<br />
PulseAudio's default sampling rate and bit depth are set to {{ic|44100Hz}} @ {{ic|16 bits}}.<br />
<br />
With this configuration, the bit rate we need is {{ic|44100}}*{{ic|16}} = {{ic|705600}} bits per second. That's {{ic|1411200 bps}} for stereo.<br />
<br />
Let's take a look at the parameters we have found in the previous step:<br />
<br />
device.buffering.buffer_size = "768000" => 768000/1411200 = 0.544217687075s = 544 msecs<br />
device.buffering.fragment_size = "384000" => 384000/1411200 = 0.272108843537s = 272 msecs<br />
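<br />
Following the calculation above, the configuration values can also be derived with shell arithmetic (the buffer and fragment sizes are the example values from step 1; substitute your own):<br />

```shell
# Values from the pulseaudio -vvvv output above (adjust for your card).
buffer_size=768000
fragment_size=384000

# Bit rate for the default 44100 Hz, 16-bit, stereo configuration.
bit_rate=$((44100 * 16 * 2))                        # 1411200 bits per second

# Fragment size in milliseconds and the number of fragments.
buffer_msec=$((buffer_size * 1000 / bit_rate))      # 544
fragment_msec=$((fragment_size * 1000 / bit_rate))  # 272
fragments=$((buffer_size / fragment_size))          # 2

echo "default-fragments = $fragments"
echo "default-fragment-size-msec = $fragment_msec"
```

Integer division rounds down, matching the 544 and 272 msec figures above.<br />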
<br />
==== Modify PulseAudio's configuration file (3/4) ====<br />
<br />
{{hc|/etc/pulse/daemon.conf|<nowiki><br />
; default-fragments = X<br />
; default-fragment-size-msec = Y<br />
</nowiki>}}<br />
<br />
In the previous step, we calculated the fragment size parameter.<br />
The number of fragments is simply buffer_size/fragment_size, which in this case ({{ic|544/272}}) is {{ic|2}}. Uncomment both settings and set them accordingly:<br />
<br />
{{hc|/etc/pulse/daemon.conf|2=<br />
default-fragments = '''2'''<br />
default-fragment-size-msec = '''272'''<br />
}}<br />
<br />
==== Restart the PulseAudio daemon (4/4) ====<br />
<br />
$ pulseaudio -k<br />
$ pulseaudio --start<br />
<br />
For more information, see: [http://forums.linuxmint.com/viewtopic.php?f=42&t=44862 Linux Mint topic]<br />
<br />
=== Choppy sound with analog surround sound setup ===<br />
<br />
The low-frequency effects (LFE) channel is not remixed by default. To enable it, the following needs to be set in {{ic|/etc/pulse/daemon.conf}}:<br />
{{hc|/etc/pulse/daemon.conf|<nowiki><br />
enable-lfe-remixing = yes<br />
</nowiki>}}<br />
<br />
=== Laggy sound ===<br />
<br />
This issue is due to incorrect buffer sizes. First verify that the variables {{ic|default-fragments}} and {{ic|default-fragment-size-msec}} are not set to non-default values in {{ic|/etc/pulse/daemon.conf}}. If the issue persists, try setting them to the following values:<br />
<br />
{{hc|/etc/pulse/daemon.conf|2=<br />
default-fragments = 5<br />
default-fragment-size-msec = 2<br />
}}<br />
<br />
=== Choppy/distorted sound ===<br />
This can result from an incorrectly set sample rate. Try the following setting:<br />
<br />
{{hc|/etc/pulse/daemon.conf|2=<br />
default-sample-rate = 48000<br />
}}<br />
and restart the PulseAudio server.<br />
<br />
If one experiences choppy sound in applications using [[Wikipedia:OpenAL|OpenAL]], change the sample rate in {{ic|/etc/openal/alsoft.conf}}:<br />
{{hc|/etc/openal/alsoft.conf|2=<br />
frequency = 48000<br />
}}<br />
<br />
Setting the PCM volume above 0 dB can cause [[Wikipedia:Clipping_(audio)|clipping]]. Running {{ic|alsamixer}} will allow you to see if this is the problem and if so fix it. Note that ALSA may not [http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/PulseAudioStoleMyVolumes correctly export] the dB information to PulseAudio. Try the following:<br />
<br />
{{hc|/etc/pulse/default.pa|2=<br />
load-module module-udev-detect ignore_dB=1<br />
}}<br />
<br />
and restart the PulseAudio server. See also [[#No sound below a volume cutoff]].<br />
<br />
== Hardware and Cards ==<br />
<br />
=== No HDMI sound output after some time with the monitor turned off ===<br />
<br />
The monitor is connected via HDMI/DisplayPort, and the audio jack is plugged in the headphone jack of the monitor, but PulseAudio insists that it is unplugged:<br />
<br />
{{hc|pactl list sinks|<br />
...<br />
hdmi-output-0: HDMI / DisplayPort (priority: 5900, not available)<br />
...<br />
}}<br />
<br />
This leads to no sound coming from the HDMI output. A workaround is to switch to another VT and back again. If that does not work, turn off the monitor, switch to another VT, turn the monitor back on, and switch back. This problem has been reported by ATI/Nvidia/Intel users.<br />
<br />
=== No cards ===<br />
<br />
If PulseAudio starts, run {{ic|pacmd list}}. If no cards are reported, make sure that the ALSA devices are not in use:<br />
<br />
$ fuser -v /dev/snd/*<br />
$ fuser -v /dev/dsp<br />
<br />
Make sure any applications using the pcm or dsp files are shut down before restarting PulseAudio.<br />
<br />
=== Starting an application interrupts other app's sound ===<br />
<br />
If some applications (e.g. TeamSpeak, Mumble) interrupt the sound output of already running applications (e.g. DeaDBeeF), you can solve this by commenting out the line {{ic|load-module module-role-cork}} in {{ic|/etc/pulse/default.pa}} as shown below:<br />
<br />
{{hc|/etc/pulse/default.pa|<br />
### Cork music/video streams when a phone stream is active<br />
# load-module module-role-cork<br />
}}<br />
<br />
Then restart PulseAudio as your normal user with<br />
<br />
pulseaudio -k<br />
pulseaudio --start<br />
<br />
=== The only device shown is "dummy output" or newly connected cards aren't detected ===<br />
<br />
This may be caused by settings in {{ic|~/.asoundrc}} overriding the system wide settings in {{ic|/etc/asound.conf}}. This can be prevented by commenting out the last line of {{ic|~/.asoundrc}} like so:<br />
<br />
{{hc|~/.asoundrc|<br />
# </home/''yourusername''/.asoundrc.asoundconf><br />
}}<br />
<br />
Maybe some program is monopolizing the audio device:<br />
<br />
{{hc|# fuser -v /dev/snd/*|<br />
USER PID ACCESS COMMAND<br />
/dev/snd/controlC0: root 931 F.... timidity<br />
bob 1195 F.... panel-6-mixer<br />
/dev/snd/controlC1: bob 1195 F.... panel-6-mixer<br />
bob 1215 F.... pulseaudio<br />
/dev/snd/pcmC0D0p: root 931 F...m timidity<br />
/dev/snd/seq: root 931 F.... timidity<br />
/dev/snd/timer: root 931 f.... timidity<br />
}}<br />
<br />
That means timidity blocks PulseAudio from accessing the audio devices; killing timidity will make the sound work again.<br />
<br />
If that does not help, or you see nothing in the output, removing the {{Pkg|timidity++}} package and restarting your system will get rid of the "dummy output".<br />
<br />
Another reason is [[FluidSynth]] conflicting with PulseAudio as discussed in [https://bbs.archlinux.org/viewtopic.php?id=154002 this thread]. One solution is to remove the package {{Pkg|fluidsynth}}.<br />
<br />
Alternatively you could modify the ''fluidsynth'' configuration file {{ic|/etc/conf.d/fluidsynth}} and change the driver to PulseAudio, then restart ''fluidsynth'' and PulseAudio:<br />
<br />
{{hc|/etc/conf.d/fluidsynth|<br />
output=AUDIO_DRIVER=pulseaudio<br />
OTHER_OPTS='-m alsa_seq -r 48000'<br />
}}<br />
<br />
=== No HDMI 5/7.1 Selection for Device ===<br />
<br />
If you are unable to select 5/7.1 channel output for a working HDMI device, then turning off "stream device reading" in {{ic|/etc/pulse/default.pa}} might help. <br />
<br />
See [[#Fallback device is not respected]].<br />
<br />
=== Failed to create sink input: sink is suspended ===<br />
<br />
If you do not have any output sound and receive dozens of errors related to a suspended sink in your {{ic|journalctl -b}} log, then backup first and then delete your user-specific pulse folders:<br />
<br />
$ rm -r ~/.pulse ~/.pulse-cookie ~/.config/pulse<br />
<br />
=== Simultaneous output to multiple sound cards / devices ===<br />
<br />
Simultaneous output to two different devices can be very useful. For example, being able to send audio to your A/V receiver via your graphics card's HDMI output, while also sending the same audio through the analogue output of your motherboard's built-in audio. This is much less hassle than it used to be (in this example, we are using GNOME desktop).<br />
<br />
Using {{Pkg|paprefs}}, simply select "Add virtual output device for simultaneous output on all local sound cards" from under the "Simultaneous Output" tab. Then, under GNOME's "sound settings", select the simultaneous output you have just created.<br />
<br />
If this doesn't work, try adding the following to {{ic|~/.asoundrc}}:<br />
<br />
pcm.dsp {<br />
type plug<br />
slave.pcm "dmix"<br />
}<br />
<br />
{{Tip|Simultaneous output can also be achieved manually using alsamixer. Disable "auto mute" item, then unmute other output sources you want to hear and increase their volume.}}<br />
<br />
=== Simultaneous output to multiple sinks on the same sound card not working ===<br />
<br />
This can be useful for users who have multiple sound sources and want to play them on different sinks/outputs. <br />
An example use-case for this would be if you play music and also voice chat and want to output music to speakers (in this case Digital S/PDIF) and voice to headphones. (Analog)<br />
<br />
This is sometimes auto-detected by PulseAudio, but not always. If you know that your sound card can output to both analog and S/PDIF at the same time, and PulseAudio does not offer this option in its profiles in ''pavucontrol'' or ''veromix'', you probably need to create a configuration file for your sound card.<br />
<br />
More precisely, you need to create a profile set for your specific sound card. This is done in two steps:<br />
* Create a udev rule that makes PulseAudio choose the configuration file specific to the sound card.<br />
* Create the actual configuration.<br />
<br />
Create a PulseAudio udev rule.<br />
<br />
{{Note|This is only an example for Asus Xonar Essence STX.<br />
Read [[udev]] to find out the correct values.}}<br />
<br />
{{Note|Your rule file should have a lower number than the original PulseAudio rule in order to take effect.}}<br />
<br />
{{hc|/usr/lib/udev/rules.d/90-pulseaudio-Xonar-STX.rules|<br />
ACTION&#61;&#61;"change", SUBSYSTEM&#61;&#61;"sound", KERNEL&#61;&#61;"card*", \<br />
ATTRS&#123;subsystem_vendor&#125;&#61;&#61;"0x1043", ATTRS&#123;subsystem_device&#125;&#61;&#61;"0x835c", ENV&#123;PULSE_PROFILE_SET&#125;&#61;"asus-xonar-essence-stx.conf" <br />
}}<br />
<br />
Now, create the configuration file. You can either write one from scratch, or copy the default configuration file, rename it, and add a profile that you know works; this is less elegant, but faster.<br />
<br />
To enable multiple sinks for the Asus Xonar Essence STX, you only need to add the following:<br />
<br />
{{Note|{{ic|asus-xonar-essence-stx.conf}} also includes all code/mappings from {{ic|default.conf}}.}}<br />
<br />
{{hc|/usr/share/pulseaudio/alsa-mixer/profile-sets/asus-xonar-essence-stx.conf|<br />
[Profile analog-stereo+iec958-stereo]<br />
description &#61; Analog Stereo Duplex + Digital Stereo Output<br />
input-mappings &#61; analog-stereo<br />
output-mappings &#61; analog-stereo iec958-stereo<br />
skip-probe &#61; yes<br />
}}<br />
<br />
This will auto-profile your Asus Xonar Essence STX with default profiles and add your own profile so you can have multiple sinks.<br />
<br />
You need to create another profile in the configuration file if you want to have the same functionality with AC3 Digital 5.1 output.<br />
<br />
[http://www.freedesktop.org/wiki/Software/PulseAudio/Backends/ALSA/Profiles/ See PulseAudio article about profiles]<br />
<br />
=== Some profiles like SPDIF are not enabled by default on the card ===<br />
<br />
Some profiles, like IEC-958 (i.e. S/PDIF), may not be enabled by default on the selected sink. Each time the system starts up, the card profile is disabled and the PulseAudio daemon cannot select it.<br />
You have to add the profile selection to your {{ic|default.pa}} file.<br />
Verify the card and profile name with:<br />
<br />
$ pacmd list-cards<br />
Then edit the configuration to add the profile:<br />
{{hc|~/.config/pulse/default.pa|<br />
## Replace with your card name and the profile you want to activate<br />
set-card-profile alsa_card.pci-0000_00_1b.0 output:iec958-stereo+input:analog-stereo<br />
}}<br />
<br />
PulseAudio will add this profile to the pool of available profiles.<br />
<br />
== Bluetooth ==<br />
<br />
=== Disable Bluetooth support ===<br />
<br />
If you do not use Bluetooth, you may experience the following error in your journal:<br />
<br />
bluez5-util.c: GetManagedObjects() failed: org.freedesktop.DBus.Error.ServiceUnknown: The name org.bluez was not provided by any .service files<br />
<br />
To disable Bluetooth support in PulseAudio, make sure that the following lines are commented out in the configuration file in use ({{ic|~/.config/pulse/default.pa}} or {{ic|/etc/pulse/default.pa}}):<br />
<br />
{{hc|~/.config/pulse/default.pa|<br />
### Automatically load driver modules for Bluetooth hardware<br />
#.ifexists module-bluetooth-policy.so<br />
#load-module module-bluetooth-policy<br />
#.endif<br />
<br />
#.ifexists module-bluetooth-discover.so<br />
#load-module module-bluetooth-discover<br />
#.endif<br />
}}<br />
<br />
=== Bluetooth headset replay problems ===<br />
<br />
Some users [https://bbs.archlinux.org/viewtopic.php?id=117420 report] huge delays or even no sound when the Bluetooth connection does not send any data. This is due to the {{ic|module-suspend-on-idle}} module, which automatically suspends sinks/sources on idle. As this can cause problems with headsets, the responsible module can be deactivated.<br />
<br />
To disable loading of the {{ic|module-suspend-on-idle}} module, comment out the following line in the configuration file in use ({{ic|~/.config/pulse/default.pa}} or {{ic|/etc/pulse/default.pa}}):<br />
<br />
{{hc|~/.config/pulse/default.pa|<br />
### Automatically suspend sinks/sources that become idle for too long<br />
#load-module module-suspend-on-idle<br />
}}<br />
<br />
Finally restart PulseAudio to apply the changes.<br />
<br />
=== Automatically switch to Bluetooth or USB headset ===<br />
<br />
Add the following:<br />
{{hc|/etc/pulse/default.pa|<br />
# automatically switch to newly-connected devices<br />
load-module module-switch-on-connect<br />
}}<br />
<br />
=== My Bluetooth device is paired but does not play any sound ===<br />
<br />
[[Bluetooth#My_device_is_paired_but_no_sound_is_played_from_it|See the article in Bluetooth section]]<br />
<br />
Starting from PulseAudio 2.99 and bluez 4.101 you should '''avoid''' using the Socket interface. Do NOT use:<br />
<br />
{{hc|/etc/bluetooth/audio.conf|<nowiki><br />
[General]<br />
Enable=Socket<br />
</nowiki>}}<br />
<br />
If you face problems with A2DP and PulseAudio 2.99, make sure you have the {{Pkg|sbc}} library installed.<br />
<br />
== Applications ==<br />
<br />
=== Flash content ===<br />
<br />
Since Adobe Flash does not directly support PulseAudio, the recommended way is to [[PulseAudio#ALSA|configure ALSA to use the virtual PulseAudio sound card]].<br />
<br />
If Flash audio is lagging, you may try to have Flash access ALSA directly. See [[PulseAudio#ALSA/dmix without grabbing hardware device]] for details.<br />
<br />
=== Permission errors bug ===<br />
<br />
{{hc|pulseaudio --start|<br />
E: [autospawn] core-util.c: Failed to create secure directory (/run/user/1000/pulse): Operation not permitted<br />
W: [autospawn] lock-autospawn.c: Cannot access autospawn lock.<br />
E: [pulseaudio] main.c: Failed to acquire autospawn lock}}<br />
<br />
Known programs that change permissions for {{ic|/run/user/''user id''/pulse}} when using [[Polkit]] for root elevation:<br />
<br />
*{{AUR|sakis3g}} <br />
<br />
As a workaround, include {{Pkg|gksu}} or {{Pkg|kdesu}} in a [[desktop entry]], or add {{ic|1=''username'' ALL=NOPASSWD: /usr/bin/''program_name''}} to [[sudoers]] to run it with {{Pkg|sudo}} or {{ic|gksudo}} without a password.<br />
<br />
The other workaround is to uncomment and set {{ic|1=daemonize = yes}} in {{ic|/etc/pulse/daemon.conf}}.<br />
<br />
See also [https://bbs.archlinux.org/viewtopic.php?id=135955].<br />
<br />
=== Audacity ===<br />
<br />
When starting Audacity you may find that your headphones no longer work. This can be because Audacity is trying to use them as a recording device. To fix this, open Audacity, then set its recording device to {{ic|1=pulse:Internal Mic:0}}.<br />
<br />
Under some circumstances, playback may be distorted, very fast, or freeze, as discussed in the [http://wiki.audacityteam.org/wiki/Linux_Issues#ALSA_and_other_sound_systems Audacity Wiki's Linux Issues page].<br />
<br />
The solution proposed in this page may work: start Audacity with:<br />
<br />
$ env PULSE_LATENCY_MSEC=30 audacity<br />
<br />
If the solution above does not fix this issue, one may wish to temporarily disable pulseaudio while running Audacity by using the {{ic|pasuspender}} command:<br />
<br />
$ pasuspender -- audacity<br />
<br />
Then, be sure to select the appropriate ALSA input and output devices in Audacity.<br />
<br />
See also [[#Setting the default fragment number and buffer size in PulseAudio]].<br />
<br />
== Other Issues ==<br />
<br />
=== Bad configuration files ===<br />
<br />
After starting PulseAudio, if the system outputs no sound, it may be necessary to delete the contents of {{ic|~/.config/pulse}} and/or {{ic|~/.pulse}}. PulseAudio will automatically create new configuration files on its next start.<br />
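A cautious way to do this is to move the directories aside instead of deleting them outright, so the old state can be restored if needed; a sketch, where the backup directory name is an arbitrary choice:<br />

```shell
# Move per-user PulseAudio state aside so fresh configuration files are
# generated on the next start; the backup path is an arbitrary example
backup=~/pulse-config-backup
mkdir -p "$backup"
for d in ~/.config/pulse ~/.pulse; do
    # Only move directories that actually exist
    [ -e "$d" ] && mv "$d" "$backup"/
done
echo "state moved to $backup"
```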
<br />
=== Can't update configuration of sound device in pavucontrol ===<br />
<br />
{{Pkg|pavucontrol}} is a handy GUI utility for configuring PulseAudio. Under its 'Configuration' tab, you can select different profiles for each of your sound devices e.g. analogue stereo, digital output (IEC958), HDMI 5.1 Surround etc.<br />
<br />
However, you may run into an instance where selecting a different profile for a card results in the pulse daemon crashing and auto restarting without the new selection "sticking". If this occurs, use the other useful GUI tool, {{Pkg|paprefs}}, to check under the "Simultaneous Output" tab for a virtual simultaneous device. If this setting is active (checked), it will prevent you from changing any card's profile in pavucontrol. Uncheck this setting, then adjust your profile in pavucontrol prior to re-enabling simultaneous output in paprefs.<br />
<br />
=== Failed to create sink input: sink is suspended ===<br />
<br />
If you do not have any output sound and receive dozens of errors related to a suspended sink in your {{ic|journalctl -b}} log, back up and then delete your user-specific pulse folders:<br />
<br />
$ rm -r ~/.pulse ~/.pulse-cookie ~/.config/pulse<br />
<br />
=== Pulse overwrites ALSA settings ===<br />
<br />
PulseAudio usually overwrites the ALSA settings — for example set with alsamixer — at start-up, even when the ALSA daemon is loaded. Since there seems to be no other way to restrict this behaviour, a workaround is to restore the ALSA settings again after PulseAudio has started. Add the following command to {{ic|.xinitrc}} or {{ic|.bash_profile}} or any other [[autostart]] file:<br />
<br />
restore_alsa() {<br />
while [ -z "$(pidof pulseaudio)" ]; do<br />
sleep 0.5<br />
done<br />
alsactl -f /var/lib/alsa/asound.state restore <br />
}<br />
restore_alsa &<br />
<br />
=== Prevent Pulse from restarting after being killed ===<br />
<br />
Sometimes you may wish to temporarily disable Pulse. In order to do so you will have to prevent Pulse from restarting after being killed.<br />
<br />
{{hc|~/.config/pulse/client.conf|2=<br />
# Disable autospawning the PulseAudio daemon<br />
autospawn = no<br />
}}<br />
<br />
=== Daemon startup failed ===<br />
<br />
Try resetting PulseAudio:<br />
<br />
$ rm -rf /tmp/pulse* ~/.pulse* ~/.config/pulse<br />
$ pulseaudio -k<br />
$ pulseaudio --start<br />
<br />
* Check that options for sinks are set up correctly.<br />
<br />
* If you have configured {{ic|default.pa}} to load and use the OSS modules, check with {{Pkg|lsof}} that the {{ic|/dev/dsp}} device is not used by another application.<br />
<br />
* LXDE may have a problem with closing all applications after the user logs out; to fix it, see [[LXDM#Incorrect logout handling|Incorrect logout handling]].<br />
<br />
* Set a preferred working resample method. Use {{ic|pulseaudio --dump-resample-methods}} to see a list of all available resample methods.<br />
<br />
* To get details about current errors, or just the status of the daemon, use commands like {{ic|pax11publish -d}} and {{ic|pulseaudio -v}}, where the {{ic|v}} option can be repeated to increase the log verbosity, equivalent to the {{ic|1=--log-level[=LEVEL]}} option with LEVEL from 0 to 4. See the [[PulseAudio#Outputs by PulseAudio error status check utilities|Outputs by PulseAudio error status check utilities]] section.<br />
<br />
See the man pages for [http://linux.die.net/man/1/pax11publish pax11publish] and [http://linux.die.net/man/1/pulseaudio pulseaudio] for more details.<br />
<br />
==== Outputs by PulseAudio error status check utilities ====<br />
<br />
If {{ic|pax11publish -d}} shows an error like:<br />
<br />
N: [pulseaudio] main.c: User-configured server at "user", refusing to start/autospawn.<br />
<br />
then run {{ic|pax11publish -r}} and then log out and back in. This manual cleanup is always required when using LXDM because it does not restart the X server on logout; see [[LXDM#PulseAudio]].<br />
<br />
If the {{ic|pulseaudio -vvvv}} command shows an error like:<br />
<br />
E: [pulseaudio] module-udev-detect.c: You apparently ran out of inotify watches, probably because Tracker/Beagle took them all away. I wished people would do their homework first and fix inotify before using it for watching whole directory trees which is something the current inotify is certainly not useful for. Please make sure to drop the Tracker/Beagle guys a line complaining about their broken use of inotify.<br />
<br />
This can be resolved temporarily with:<br />
$ echo 100000 > /proc/sys/fs/inotify/max_user_watches<br />
<br />
To make the setting permanent, save it in the ''99-sysctl.conf'' file:<br />
<br />
{{hc|/etc/sysctl.d/99-sysctl.conf|2=<br />
# Increase inotify max watches per user<br />
fs.inotify.max_user_watches = 100000}}<br />
<br />
{{Warning|It may significantly increase the kernel's memory consumption.}}<br />
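The limit currently in effect can be read back at any time through procfs; a small sketch that falls back to 0 if the file is unavailable:<br />

```shell
# Read the per-user inotify watch limit currently in effect;
# prints 0 if the procfs file is not available on this system
watches=$(cat /proc/sys/fs/inotify/max_user_watches 2>/dev/null || echo 0)
echo "max_user_watches: $watches"
```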
<br />
'''See also''' <br />
<br />
* [http://www.linuxinsight.com/proc_sys_fs_inotify.html proc_sys_fs_inotify] and [http://lwn.net/Articles/604686/ dnotify, inotify] - more details about ''inotify/max_user_watches''<br />
* [http://stackoverflow.com/questions/535768/what-is-a-reasonable-amount-of-inotify-watches-with-linux?answertab=votes#tab-top reasonable amount of inotify watches with Linux]<br />
* [http://linux.die.net/man/7/inotify inotify] - man page<br />
<br />
=== Daemon already running ===<br />
<br />
On some systems, PulseAudio may be started multiple times. journalctl will report:<br />
<br />
[pulseaudio] pid.c: Daemon already running.<br />
<br />
Make sure to use only one method of autostarting applications. {{Pkg|pulseaudio}} includes these files:<br />
<br />
* {{ic|/etc/X11/xinit/xinitrc.d/pulseaudio}}<br />
* {{ic|/etc/xdg/autostart/pulseaudio.desktop}}<br />
* {{ic|/etc/xdg/autostart/pulseaudio-kde.desktop}}<br />
<br />
Also check user autostart files and directories, such as [[xinitrc]], {{ic|~/.config/autostart/}} etc.<br />
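To see at a glance which of these autostart entries are present on a given system, the paths above can be checked in one pass; a sketch, where the user autostart path is the conventional XDG location:<br />

```shell
# Count the PulseAudio autostart entries that exist;
# more than one suggests a duplicate-start conflict
count=0
for f in /etc/X11/xinit/xinitrc.d/pulseaudio \
         /etc/xdg/autostart/pulseaudio.desktop \
         /etc/xdg/autostart/pulseaudio-kde.desktop \
         ~/.config/autostart/pulseaudio*.desktop; do
    if [ -e "$f" ]; then
        echo "found: $f"
        count=$((count + 1))
    fi
done
echo "autostart entries: $count"
```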
<br />
=== Subwoofer stops working after end of every song ===<br />
<br />
Known issue: https://bugs.launchpad.net/ubuntu/+source/pulseaudio/+bug/494099<br />
<br />
To fix this, edit {{ic|/etc/pulse/daemon.conf}} and enable {{ic|enable-lfe-remixing}}:<br />
{{hc|/etc/pulse/daemon.conf|<nowiki><br />
enable-lfe-remixing = yes<br />
</nowiki>}}<br />
<br />
=== Unable to select surround configuration other than "Surround 4.0" ===<br />
<br />
If you are unable to set 5.1 surround output in pavucontrol because it only shows "Analog Surround 4.0 Output", open the ALSA mixer and change the output configuration there to 6 channels. Then restart PulseAudio, and pavucontrol will list many more options.<br />
<br />
=== Realtime scheduling ===<br />
<br />
If rtkit does not work, you can manually set up your system to run PulseAudio with real-time scheduling, which can help performance. To do this, add the following lines to {{ic|/etc/security/limits.conf}}:<br />
<br />
@pulse-rt - rtprio 9<br />
@pulse-rt - nice -11<br />
<br />
Afterwards, you need to add your user to the {{ic|pulse-rt}} group:<br />
<br />
# gpasswd -a <user> pulse-rt<br />
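To verify that the group membership took effect (after logging out and back in), the output of {{ic|id -nG}} can be checked; a sketch run here against a hypothetical sample string in place of the live output:<br />

```shell
# Hypothetical `id -nG` output for illustration;
# on a real system use:  id -nG "$USER"
groups_sample='wheel audio video pulse-rt'
if printf '%s\n' "$groups_sample" | tr ' ' '\n' | grep -qx 'pulse-rt'; then
    in_group=yes
else
    in_group=no
fi
echo "pulse-rt membership: $in_group"
```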
<br />
=== pactl "invalid option" error with negative percentage arguments ===<br />
<br />
{{ic|pactl}} commands that take negative percentage arguments will fail with an 'invalid option' error. Use the standard shell '--' pseudo argument<br />
to disable argument parsing before the negative argument. ''e.g.'' {{ic|pactl set-sink-volume 1 -- -5%}}.<br />
<br />
=== Fallback device is not respected ===<br />
<br />
PulseAudio does not have a true default device. Instead it uses a [http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/DefaultDevice/ "fallback"], which only applies to new sound streams. This means previously run applications are not affected by the newly set fallback device.<br />
<br />
{{Pkg|gnome-control-center}}, {{Pkg|mate-media-pulseaudio}}{{Broken package link|replaced by {{Pkg|mate-media}}}} and {{AUR|paswitch}} handle this gracefully. Alternatively: <br />
<br />
1. Move the old streams in {{Pkg|pavucontrol}} manually to the new sound card.<br />
<br />
2. Stop Pulse, erase the "stream-volumes" in {{ic|~/.config/pulse}} and/or {{ic|~/.pulse}} and restart Pulse. This also resets application volumes.<br />
<br />
3. Disable stream device restoring. This may be unwanted when using different sound cards with different applications.<br />
<br />
{{hc|/etc/pulse/default.pa|<nowiki><br />
load-module module-stream-restore restore_device=false<br />
</nowiki>}}</div>Mousemanhttps://wiki.archlinux.org/index.php?title=Intel_graphics&diff=414326Intel graphics2016-01-03T17:40:36Z<p>Mouseman: /* Configuration */</p>
<hr />
<div>[[Category:Graphics]]<br />
[[Category:X server]]<br />
[[cs:Intel graphics]]<br />
[[de:Intel]]<br />
[[es:Intel graphics]]<br />
[[fr:Intel]]<br />
[[hu:Intel graphics]]<br />
[[it:Intel graphics]]<br />
[[ja:Intel Graphics]]<br />
[[pl:Intel graphics]]<br />
[[ru:Intel graphics]]<br />
[[zh-cn:Intel graphics]]<br />
[[zh-tw:Intel graphics]]<br />
{{Related articles start}}<br />
{{Related|Intel GMA3600}}<br />
{{Related|Poulsbo}}<br />
{{Related|Xorg}}<br />
{{Related|Kernel mode setting}}<br />
{{Related|Xrandr}}<br />
{{Related|Hybrid graphics}}<br />
{{Related articles end}}<br />
<br />
Since Intel provides and supports open source drivers, Intel graphics are now essentially plug-and-play.<br />
<br />
For a comprehensive list of Intel GPU models and corresponding chipsets and CPUs, see [[Wikipedia:Comparison of Intel graphics processing units|this comparison on Wikipedia]].<br />
<br />
{{Note|PowerVR-based graphics ([[GMA 500]] and [[Intel GMA3600|GMA 3600]] series) are not supported by open source drivers.}}<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|xf86-video-intel}} package. It provides the DDX driver for 2D acceleration and it pulls in {{Pkg|mesa}} as a dependency, providing the DRI driver for 3D acceleration.<br />
<br />
To enable OpenGL support, also install {{Pkg|mesa-libgl}}. If you are on x86_64 and need 32-bit support, also install {{Pkg|lib32-mesa-libgl}} from the [[multilib]] repository.<br />
<br />
Follow [[VA-API]] and [[VDPAU]] for hardware-accelerated video processing; on older GPUs, this is provided instead by the [[XvMC]] driver, which is included with the DDX driver.<br />
<br />
== Configuration ==<br />
<br />
There is no need for any configuration to run [[Xorg]].<br />
<br />
{{Note|The latest generation of integrated GPUs (Skylake/HD 530 for instance) may require the {{ic|1=i915.preliminary_hw_support=1}} [[kernel parameter]] to boot properly. As of kernel 4.3.x, this should no longer be necessary.}}<br />
<br />
However, to take advantage of some driver options, you will need to create a Xorg configuration file similar to the one below:<br />
<br />
{{hc|/etc/X11/xorg.conf.d/20-intel.conf|<br />
Section "Device"<br />
Identifier "Intel Graphics"<br />
Driver "intel"<br />
EndSection}}<br />
<br />
Additional options are added by the user on new lines below {{ic|Driver}}.<br />
<br />
{{Note|<br />
*You may need to indicate {{ic|AccelMethod}} when creating a configuration file, even just to set it to the default method (currently {{ic|"sna"}}); otherwise, X may crash.<br />
*You might need to add more device sections than the one listed above. This will be indicated where necessary.}} <br />
<br />
For the full list of options, see the [[man page]] for {{ic|intel}}.<br />
<br />
== Loading ==<br />
<br />
The Intel kernel module should load fine automatically on system boot.<br />
<br />
If it does not happen, then:<br />
<br />
* Make sure you do '''not''' have {{ic|nomodeset}} or {{ic|1=vga=}} as a [[kernel parameter]], since Intel requires kernel mode-setting.<br />
* Also, check that you have not disabled Intel by using any modprobe blacklisting within {{ic|/etc/modprobe.d/}} or {{ic|/usr/lib/modprobe.d/}}.<br />
<br />
=== Enable early KMS ===<br />
<br />
{{Tip|If you have problems with the resolution, you can check whether [[Kernel mode setting#Forcing modes and EDID|enforcing the mode]] helps.}}<br />
<br />
[[Kernel mode setting]] (KMS) is supported by Intel chipsets that use the i915 DRM driver and is mandatory and enabled by default. <br />
<br />
KMS is typically initialized after the [[Arch boot process#initramfs|initramfs stage]]. It is possible, however, to enable KMS during the initramfs stage. To do this, add the {{ic|i915}} module to the {{ic|MODULES}} line in {{ic|/etc/mkinitcpio.conf}}:<br />
<br />
MODULES="... i915 ..."<br />
<br />
{{Tip|<br />
Users might need to add {{Ic|intel_agp}} before {{Ic|i915}} to suppress the ACPI errors. The order matters because the modules are activated in sequence. This might be required for resuming from hibernation to work with changed display configuration!}}<br />
<br />
If you are using a custom [[Wikipedia:Extended display identification data|EDID]] file, you should embed it into initramfs as well:<br />
<br />
{{hc|/etc/mkinitcpio.conf|<br />
2=FILES="/lib/firmware/edid/your_edid.bin"}}<br />
<br />
Now, regenerate the initramfs:<br />
<br />
# mkinitcpio -p linux<br />
<br />
The change takes effect at the next reboot.<br />
<br />
== Module-based Powersaving Options ==<br />
<br />
The {{ic|i915}} kernel module allows for configuration via [[Kernel modules#Setting module options|module options]]. Some of the module options impact power saving.<br />
<br />
A list of all options along with short descriptions and default values can be generated with the following command:<br />
<br />
$ modinfo -p i915<br />
<br />
To check which options are currently enabled, run<br />
<br />
# systool -m i915 -av<br />
<br />
You will note that the {{ic|i915.powersave}} option which "enable[s] powersavings, fbc, downclocking, etc." is enabled by default, resulting in per-chip powersaving defaults. It is however possible to configure more aggressive powersaving by using [[Kernel modules#Setting module options|module options]].<br />
<br />
{{Warning|1=Diverting from the defaults will mark the kernel as [https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=fc9740cebc3ab7c65f3c5f6ce0caf3e4969013ca tainted] from Linux 3.18 onwards. This basically implies using other options than the per-chip defaults is considered experimental and not supported by the developers. }}<br />
<br />
The following set of options should be generally safe to enable:<br />
<br />
{{hc|/etc/modprobe.d/i915.conf|<nowiki><br />
options i915 enable_rc6=1 enable_fbc=1 lvds_downclock=1 semaphores=1<br />
</nowiki>}}<br />
<br />
You can experiment with higher values for {{ic|enable_rc6}}, but your GPU may not support them or the activation of the other options [https://wiki.archlinux.org/index.php?title=Talk:Intel_Graphics&oldid=327547#Kernel_Module_options].<br />
<br />
Framebuffer compression, for example, may be unreliable or unavailable on Intel GPU generations before Sandy Bridge (generation 6). This results in messages logged to the system journal similar to this one:<br />
kernel: drm: not enough stolen space for compressed buffer, disabling.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Enable Glamor Acceleration Method ===<br />
<br />
[https://wiki.freedesktop.org/www/Software/Glamor/ Glamor] is Intel's experimental OpenGL 2D acceleration method and is not documented in the manpages. To use it, add the following line to your [[#Configuration|configuration file]]:<br />
Option "AccelMethod" "glamor"<br />
<br />
{{Note|This acceleration method is experimental and may not be stable for your system.}}<br />
<br />
=== Direct Rendering Infrastructure 3 (DRI3) ===<br />
<br />
By default Direct Rendering Infrastructure 2 (DRI2) is used. To enable the next generation of DRI, [[Wikipedia:Direct_Rendering_Infrastructure#DRI3|DRI3]], which contains several improvements, add the following line to your [[#Configuration|configuration file]]:<br />
Option "DRI" "3"<br />
<br />
To verify that DRI3 is enabled you can check the [[Xorg]] log files after restarting.<br />
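A sketch of what that check might look like, run here against a hypothetical log line instead of the real file (commonly {{ic|/var/log/Xorg.0.log}} or {{ic|~/.local/share/xorg/Xorg.0.log}}):<br />

```shell
# Hypothetical Xorg log line for illustration; on a real system:
#   grep -i dri3 /var/log/Xorg.0.log
log_line='(II) intel(0): direct rendering: DRI2 DRI3 enabled'
case "$log_line" in
    *DRI3*) dri3_status="enabled" ;;
    *)      dri3_status="not found" ;;
esac
echo "DRI3: $dri3_status"
```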
<br />
=== Tear-free video ===<br />
<br />
The SNA acceleration method causes tearing for some people. To fix this, enable the {{ic|"TearFree"}} option in the driver by adding the following line to your [[#Configuration|configuration file]]:<br />
Option "TearFree" "true"<br />
<br />
See the [https://bugs.freedesktop.org/show_bug.cgi?id=37686 original bug report] for more info.<br />
<br />
{{Note|<br />
* This option may not work when {{ic|SwapbuffersWait}} is {{ic|false}}.<br />
* This option is problematic for applications that are very picky about vsync timing, like [[Wikipedia:Super Meat Boy|Super Meat Boy]].<br />
* This option does not work with UXA acceleration method, only with SNA.<br />
}}<br />
<br />
=== Disable Vertical Synchronization (VSYNC) ===<br />
The Intel driver uses [http://www.intel.com/support/graphics/sb/CS-004527.htm Triple Buffering] for vertical synchronization, which allows for full performance and avoids tearing. To turn vertical synchronization off (e.g. for benchmarking), use this {{ic|.drirc}} in your home directory:<br />
<br />
{{hc|~/.drirc|<br />
<device screen&#61;"0" driver&#61;"dri2"><br />
<application name&#61;"Default"><br />
<option name&#61;"vblank_mode" value&#61;"0"/><br />
</application><br />
</device>}}<br />
<br />
{{Warning|Do not use {{Pkg|driconf}} to create this file, it is buggy and will set the wrong driver.}}<br />
<br />
=== Setting scaling mode ===<br />
<br />
This can be useful for some full screen applications:<br />
<br />
$ xrandr --output LVDS1 --set PANEL_FITTING param<br />
<br />
where {{ic|param}} can be:<br />
<br />
* {{ic|center}}: resolution will be kept exactly as defined, no scaling will be made,<br />
* {{ic|full}}: scale the resolution so it uses the entire screen or<br />
* {{ic|full_aspect}}: scale the resolution to the maximum possible but keep the aspect ratio.<br />
<br />
If it does not work, try:<br />
<br />
$ xrandr --output LVDS1 --set "scaling mode" param<br />
<br />
where {{ic|param}} is one of {{ic|"Full"}}, {{ic|"Center"}} or {{ic|"Full aspect"}}.<br />
<br />
=== KMS Issue: console is limited to small area ===<br />
<br />
One of the low-resolution video ports may be enabled on boot, causing the console to use only a small area of the screen. To fix this, explicitly disable the port with the {{ic|1=video=SVIDEO-1:d}} [[kernel parameter]] in your bootloader. See [[Kernel parameters]] for more info.<br />
<br />
If that does not work, try disabling TV1 or VGA1 instead of SVIDEO-1.<br />
<br />
=== H.264 decoding on GMA 4500 ===<br />
<br />
The {{Pkg|libva-intel-driver}} package provides MPEG-2 decoding only for GMA 4500 series GPUs. H.264 decoding support is maintained in a separate g45-h264 branch, which can be used by installing the {{AUR|libva-intel-driver-g45-h264}} package. Note however that this support is experimental and its development has been abandoned. Using the VA-API with this driver on a GMA 4500 series GPU will offload the CPU but may not result in as smooth a playback as non-accelerated playback. Tests using mplayer showed that using VA-API to play back an H.264-encoded 1080p video halved the CPU load (compared to the XV overlay) but resulted in very choppy playback, while 720p worked reasonably well [https://bbs.archlinux.org/viewtopic.php?id=150550]. This is echoed by other experiences [http://www.emmolution.org/?p=192&cpage=1#comment-12292].<br />
<br />
=== Setting brightness and gamma ===<br />
<br />
See [[Backlight]].<br />
<br />
== Troubleshooting ==<br />
<br />
=== SNA issues ===<br />
From {{ic|man 4 intel}}:<br />
:''There are a couple of backends available for accelerating the DDX. "UXA" (Unified Acceleration Architecture) is the mature backend that was introduced to support the GEM driver model. It is in the process of being superseded by "SNA" (Sandybridge's New Acceleration). Until that process is complete, the ability to choose which backend to use remains for backwards compatibility.''<br />
<br />
''SNA'' is the default acceleration method in {{Pkg|xf86-video-intel}}. If you experience issues with ''SNA'' (e.g. pixelated graphics, corrupt text, etc.), try using ''UXA'' instead, which can be done by adding the following line to your [[#Configuration|configuration file]]:<br />
Option "AccelMethod" "uxa"<br />
<br />
=== Blank screen during boot, when "Loading modules" ===<br />
<br />
If using "late start" KMS and the screen goes blank when "Loading modules", it may help to add {{ic|i915}} and {{ic|intel_agp}} to the initramfs. See [[Kernel mode setting#Early KMS start]] section.<br />
<br />
Alternatively, appending the following [[kernel parameter]] seems to work as well:<br />
<br />
video=SVIDEO-1:d<br />
<br />
If you need to output to VGA then try this:<br />
<br />
video=VGA-1:1280x800<br />
<br />
=== X freeze/crash with intel driver ===<br />
<br />
Some issues with X crashing, GPU hanging, or problems with X freezing, can be fixed by disabling the GPU usage with the {{ic|NoAccel}} option - add the following lines to your [[#Configuration|configuration file]]:<br />
Option "NoAccel" "True"<br />
<br />
Alternatively, try to disable the 3D acceleration only with the {{ic|DRI}} option:<br />
Option "DRI" "False"<br />
<br />
If you experience crashes and have<br />
<br />
Option "TearFree" "true"<br />
Option "AccelMethod" "sna"<br />
<br />
in your configuration file, in most cases these can be fixed by adding<br />
<br />
i915.semaphores=1<br />
<br />
to your boot parameters.<br />
<br />
If you are using kernel 4.0.X or above on the Baytrail architecture and frequently encounter complete system freezes (especially when watching video or using the GPU intensively), try adding the following kernel option as a workaround until [https://bugzilla.kernel.org/show_bug.cgi?id=109051 this bug] is fixed permanently.<br />
<br />
intel_idle.max_cstate=1<br />
<br />
=== Adding undetected resolutions ===<br />
<br />
This issue is covered on the [[Xrandr#Adding undetected resolutions|Xrandr page]].<br />
<br />
=== Weathered colors (color range problem) ===<br />
<br />
{{Note|This problem is related to [http://lists.freedesktop.org/archives/dri-devel/2013-January/033576.html changes] in kernel 3.9 and still remains as of kernel 4.1.}}<br />
Kernel 3.9 introduced a new default "Automatic" mode for the "Broadcast RGB" property of the Intel driver. It is almost equivalent to "Limited 16:235" (instead of the old default "Full") whenever an HDMI/DP output is in a [http://raspberrypi.stackexchange.com/questions/7332/what-is-the-difference-between-cea-and-dmt CEA mode]. If a monitor does not support a limited-range signal, this causes washed-out ("weathered") colors.<br />
<br />
{{Note|Some monitors/TVs support both color ranges. In that case an option often known as ''Black Level'' may need to be adjusted to make them handle the signal correctly.}}<br />
<br />
One can force the mode, e.g. {{ic|xrandr --output <HDMI> --set "Broadcast RGB" "Full"}} (replace {{ic|<HDMI>}} with the appropriate output device; verify by running {{ic|xrandr}}). To apply the command before the graphical session starts, add it to your {{ic|.xprofile}}.<br />
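A sketch of such an {{ic|~/.xprofile}} fragment, where {{ic|HDMI1}} is an assumed output name (check the real one with {{ic|xrandr}}):<br />

```shell
# ~/.xprofile fragment -- HDMI1 is an assumption; substitute your actual
# output name as reported by `xrandr`
xrandr --output HDMI1 --set "Broadcast RGB" "Full"
```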
<br />
{{Note|Some TVs can only handle a limited-range signal. Setting Broadcast RGB to "Full" will then cause color clipping; you may need to set it to "Limited 16:235" manually to avoid the clipping.}}<br />
<br />
There are also other related problems which can be fixed by editing GPU registers. More information can be found in [http://lists.freedesktop.org/archives/intel-gfx/2012-April/016217.html] and [http://github.com/OpenELEC/OpenELEC.tv/commit/09109e9259eb051f34f771929b6a02635806404c].<br />
<br />
Unfortunately, the Intel driver does not support setting the color range through an {{ic|xorg.conf.d}} configuration file.<br />
<br />
A [https://bugzilla.kernel.org/show_bug.cgi?id=94921 bug report] is filed and a patch can be found in the attachment.<br />
<br />
=== Backlight is not adjustable===<br />
<br />
If after resuming from suspend, the hotkeys for changing the screen brightness do not take effect, check your configuration against the [[Backlight]] article.<br />
<br />
If the problem persists, try one of the following [[kernel parameters]]:<br />
<br />
acpi_osi=Linux<br />
acpi_osi="!Windows 2012"<br />
acpi_osi=<br />
<br />
=== Disabling frame buffer compression ===<br />
<br />
Enabling frame buffer compression on pre-Sandy Bridge CPUs results in endless error messages:<br />
<br />
$ dmesg |tail <br />
[ 2360.475430] [drm] not enough stolen space for compressed buffer (need 4325376 bytes), disabling<br />
[ 2360.475437] [drm] hint: you may be able to increase stolen memory size in the BIOS to avoid this<br />
<br />
The solution is to disable frame buffer compression which will slightly increase power consumption. In order to disable it add {{ic|i915.enable_fbc&#61;0}} to the kernel line parameters. More information on the results of disabled compression can be found [http://zinc.canonical.com/~cking/power-benchmarking/background-colour-and-framebuffer-compression/results.txt here].<br />
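Whether the option took effect can be checked through sysfs; a sketch (the module parameter path is standard, but the {{ic|i915}} module must be loaded for it to exist):<br />

```shell
# Print the current value of i915.enable_fbc, or note that the module
# is not loaded on this system
param=/sys/module/i915/parameters/enable_fbc
if [ -r "$param" ]; then
    fbc_state=$(cat "$param")
else
    fbc_state="i915 not loaded"
fi
echo "enable_fbc: $fbc_state"
```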
<br />
=== Corruption/Unresponsiveness in Chromium and Firefox ===<br />
<br />
If you experience corruption or unresponsiveness in Chromium and/or Firefox [[#SNA issues|set the AccelMethod to "uxa"]].<br />
<br />
=== Kernel crashing w/kernels 4.0+ on Broadwell/Core-M chips ===<br />
<br />
A few seconds after X/Wayland loads the machine will freeze and journalctl will log a kernel crash referencing the Intel graphics as below:<br />
<br />
Jun 16 17:54:03 hostname kernel: BUG: unable to handle kernel NULL pointer dereference at (null)<br />
Jun 16 17:54:03 hostname kernel: IP: [< (null)>] (null)<br />
...<br />
Jun 16 17:54:03 hostname kernel: CPU: 0 PID: 733 Comm: gnome-shell Tainted: G U O 4.0.5-1-ARCH #1<br />
...<br />
Jun 16 17:54:03 hostname kernel: Call Trace:<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa055cc27>] ? i915_gem_object_sync+0xe7/0x190 [i915]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa0579634>] intel_execlists_submission+0x294/0x4c0 [i915]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa05539fc>] i915_gem_do_execbuffer.isra.12+0xabc/0x1230 [i915]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa055d349>] ? i915_gem_object_set_to_cpu_domain+0xa9/0x1f0 [i915]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffff811ba2ae>] ? __kmalloc+0x2e/0x2a0<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa0555471>] i915_gem_execbuffer2+0x141/0x2b0 [i915]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa042fcab>] drm_ioctl+0x1db/0x640 [drm]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa0555330>] ? i915_gem_execbuffer+0x450/0x450 [i915]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffff8122339b>] ? eventfd_ctx_read+0x16b/0x200<br />
Jun 16 17:54:03 hostname kernel: [<ffffffff811ebc36>] do_vfs_ioctl+0x2c6/0x4d0<br />
Jun 16 17:54:03 hostname kernel: [<ffffffff811f6452>] ? __fget+0x72/0xb0<br />
Jun 16 17:54:03 hostname kernel: [<ffffffff811ebec1>] SyS_ioctl+0x81/0xa0<br />
Jun 16 17:54:03 hostname kernel: [<ffffffff8157a589>] system_call_fastpath+0x12/0x17<br />
Jun 16 17:54:03 hostname kernel: Code: Bad RIP value.<br />
Jun 16 17:54:03 hostname kernel: RIP [< (null)>] (null)<br />
<br />
This can be fixed by disabling execlist support which was changed to default on with kernel 4.0. Add the following kernel parameter:<br />
i915.enable_execlists=0<br />
<br />
This is known to be broken up to at least kernel 4.0.5.<br />
<br />
===Driver not working for Intel Skylake chips===<br />
<br />
For the driver to work on the new Intel Skylake (6th gen.) GPUs, {{ic|i915.preliminary_hw_support&#61;1}} must be added to your boot parameters.<br />
<br />
== See also ==<br />
<br />
* https://01.org/linuxgraphics/documentation (includes a list of supported hardware)</div>Mousemanhttps://wiki.archlinux.org/index.php?title=Intel_graphics&diff=410467Intel graphics2015-11-28T19:23:10Z<p>Mouseman: /* Configuration */</p>
<hr />
<div>[[Category:Graphics]]<br />
[[Category:X server]]<br />
[[cs:Intel graphics]]<br />
[[de:Intel]]<br />
[[es:Intel graphics]]<br />
[[fr:Intel]]<br />
[[hu:Intel graphics]]<br />
[[it:Intel graphics]]<br />
[[ja:Intel Graphics]]<br />
[[pl:Intel graphics]]<br />
[[ru:Intel graphics]]<br />
[[zh-cn:Intel graphics]]<br />
[[zh-tw:Intel graphics]]<br />
{{Related articles start}}<br />
{{Related|Intel GMA3600}}<br />
{{Related|Poulsbo}}<br />
{{Related|Xorg}}<br />
{{Related|Kernel mode setting}}<br />
{{Related|Xrandr}}<br />
{{Related|Hybrid graphics}}<br />
{{Related articles end}}<br />
<br />
Since Intel provides and supports open source drivers, Intel graphics are now essentially plug-and-play.<br />
<br />
For a comprehensive list of Intel GPU models and corresponding chipsets and CPUs, see [[Wikipedia:Comparison of Intel graphics processing units|this comparison on Wikipedia]].<br />
<br />
{{Note|PowerVR-based graphics ([[GMA 500]] and [[Intel GMA3600|GMA 3600]] series) are not supported by open source drivers.}}<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|xf86-video-intel}} package. It provides the DDX driver for 2D acceleration and it pulls in {{Pkg|mesa}} as a dependency, providing the DRI driver for 3D acceleration.<br />
<br />
To enable OpenGL support, also install {{Pkg|mesa-libgl}}. If you are on x86_64 and need 32-bit support, also install {{Pkg|lib32-mesa-libgl}} from the [[multilib]] repository.<br />
<br />
Install the [[VA-API]] driver and library provided by the {{Pkg|libva-intel-driver}} and {{Pkg|libva}} packages respectively. On older GPUs, this is provided instead by the [[XvMC]] driver, which is included with the DDX driver.<br />
<br />
== Configuration ==<br />
<br />
There is no need for any configuration to run [[Xorg]].<br />
<br />
{{Note|<br />
The latest generation of integrated GPUs (for instance Skylake/HD 530) may require {{ic|i915.preliminary_hw_support<nowiki>=1</nowiki>}} as a kernel parameter to boot properly. Search the wiki for more info on adding kernel boot parameters for your boot loader.}}<br />
<br />
For the full list of options, see the [[man page]] for {{ic|intel}}.<br />
<br />
However, to take advantage of some driver options, you will need to create a Xorg configuration file similar to the one below:<br />
<br />
{{hc|/etc/X11/xorg.conf.d/20-intel.conf|<br />
Section "Device"<br />
Identifier "Intel Graphics"<br />
Driver "intel"<br />
EndSection}}<br />
<br />
Additional options are added by the user on new lines below {{ic|Driver}}.<br />
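For example, a file that explicitly selects the default SNA acceleration method might look like this (a sketch; {{ic|intel}}(4) lists the valid options):<br />

```
Section "Device"
        Identifier "Intel Graphics"
        Driver     "intel"
        Option     "AccelMethod" "sna"
EndSection
```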
<br />
{{Note|<br />
*You may need to indicate {{ic|AccelMethod}} when creating a configuration file, even just to set it to the default method (currently {{ic|"sna"}}); otherwise, X may crash.<br />
*You might need to add more device sections than the one listed above. This will be indicated where necessary.}} <br />
<br />
== Loading ==<br />
<br />
The Intel kernel module should load fine automatically on system boot.<br />
<br />
If it does not, check the following:<br />
<br />
* Make sure you do '''not''' have {{ic|nomodeset}} or {{ic|1=vga=}} as a [[kernel parameter]], since Intel requires kernel mode-setting.<br />
* Also, check that you have not disabled Intel by using any modprobe blacklisting within {{ic|/etc/modprobe.d/}} or {{ic|/usr/lib/modprobe.d/}}.<br />
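These two checks can be sketched as a short script (standard Linux paths; run it on the affected machine):<br />

```shell
#!/bin/sh
# Is the i915 module loaded, and does the running kernel's command line
# contain a parameter that would disable kernel mode-setting?
if grep -q '^i915' /proc/modules; then
    echo "i915 loaded"
else
    echo "i915 not loaded"
fi
if grep -qE 'nomodeset|vga=' /proc/cmdline; then
    echo "warning: KMS-disabling kernel parameter present"
fi
```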
<br />
=== Enable early KMS ===<br />
<br />
{{Tip|If you have problems with the resolution, you can check whether [[Kernel mode setting#Forcing modes and EDID|enforcing the mode]] helps.}}<br />
<br />
[[Kernel mode setting]] (KMS) is supported by Intel chipsets that use the i915 DRM driver and is mandatory and enabled by default. <br />
<br />
KMS is typically initialized after the [[Arch boot process#initramfs|initramfs stage]]. It is possible, however, to enable KMS during the initramfs stage. To do this, add the {{ic|i915}} module to the {{ic|MODULES}} line in {{ic|/etc/mkinitcpio.conf}}:<br />
<br />
MODULES="... i915 ..."<br />
<br />
{{Tip|<br />
Users might need to add {{Ic|intel_agp}} before {{Ic|i915}} to suppress ACPI errors. The order matters because the modules are activated in sequence. This might also be required for resuming from hibernation to work with a changed display configuration.}}<br />
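This edit can be scripted with {{ic|sed}}; a sketch, demonstrated on a temporary stand-in file so nothing is touched until you have reviewed the result (apply the same expression to {{ic|/etc/mkinitcpio.conf}} as root):<br />

```shell
#!/bin/sh
# Prepend intel_agp and i915 to the MODULES line (order preserved, as the
# modules are activated in sequence).
conf=$(mktemp)
echo 'MODULES=""' > "$conf"      # stand-in for the real mkinitcpio.conf
sed -i 's/^MODULES="/MODULES="intel_agp i915 /' "$conf"
grep '^MODULES=' "$conf"         # prints: MODULES="intel_agp i915 "
rm -f "$conf"
```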
<br />
If you are using a custom [[Wikipedia:Extended display identification data|EDID]] file, you should embed it into initramfs as well:<br />
<br />
{{hc|/etc/mkinitcpio.conf|<br />
2=FILES="/lib/firmware/edid/your_edid.bin"}}<br />
<br />
Now, regenerate the initramfs:<br />
<br />
# mkinitcpio -p linux<br />
<br />
The change takes effect at the next reboot.<br />
<br />
== Module-based Powersaving Options ==<br />
<br />
The {{ic|i915}} kernel module allows for configuration via [[Kernel modules#Setting module options|module options]]. Some of the module options impact power saving.<br />
<br />
A list of all options along with short descriptions and default values can be generated with the following command:<br />
<br />
$ modinfo -p i915<br />
<br />
To check which options are currently enabled, run<br />
<br />
# systool -m i915 -av<br />
<br />
You will note that the {{ic|i915.powersave}} option, which "enable[s] powersavings, fbc, downclocking, etc.", is enabled by default, resulting in per-chip powersaving defaults. It is however possible to configure more aggressive powersaving by using [[Kernel modules#Setting module options|module options]].<br />
<br />
{{Warning|1=Diverting from the defaults will mark the kernel as [https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=fc9740cebc3ab7c65f3c5f6ce0caf3e4969013ca tainted] from Linux 3.18 onwards. This basically implies using other options than the per-chip defaults is considered experimental and not supported by the developers. }}<br />
<br />
The following set of options should be generally safe to enable:<br />
<br />
{{hc|/etc/modprobe.d/i915.conf|<nowiki><br />
options i915 enable_rc6=1 enable_fbc=1 lvds_downclock=1 semaphores=1<br />
</nowiki>}}<br />
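After a reboot, you can read back the values the driver actually picked up; a sketch (parameter names taken from the options above; it only prints values on a machine where i915 is loaded and the parameters exist):<br />

```shell
#!/bin/sh
# Show the live values of the i915 options configured above.
for p in enable_rc6 enable_fbc lvds_downclock semaphores; do
    f="/sys/module/i915/parameters/$p"
    if [ -r "$f" ]; then
        printf '%s=%s\n' "$p" "$(cat "$f")"
    fi
done
```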
<br />
You can experiment with higher values for {{ic|enable_rc6}}, but your GPU may not support them, or may not support them in combination with the other options [https://wiki.archlinux.org/index.php?title=Talk:Intel_Graphics&oldid=327547#Kernel_Module_options].<br />
<br />
Framebuffer compression, for example, may be unreliable or unavailable on Intel GPU generations before Sandy Bridge (generation 6). This results in messages logged to the system journal similar to this one:<br />
kernel: drm: not enough stolen space for compressed buffer, disabling.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Enable Glamor Acceleration Method ===<br />
<br />
[https://wiki.freedesktop.org/www/Software/Glamor/ Glamor] is Intel's experimental OpenGL 2D acceleration method and is not documented in the manpages. To use it, add the following line to your [[#Configuration|configuration file]]:<br />
Option "AccelMethod" "glamor"<br />
<br />
{{Note|This acceleration method is experimental and may not be stable for your system.}}<br />
<br />
=== Direct Rendering Infrastructure 3 (DRI3) ===<br />
<br />
By default Direct Rendering Infrastructure 2 (DRI2) is used. To enable the next generation of DRI, [[Wikipedia:Direct_Rendering_Infrastructure#DRI3|DRI3]], which contains several improvements, add the following line to your [[#Configuration|configuration file]]:<br />
Option "DRI" "3"<br />
<br />
To verify that DRI3 is enabled you can check the [[Xorg]] log files after restarting.<br />
<br />
=== Tear-free video ===<br />
<br />
The SNA acceleration method causes tearing for some people. To fix this, enable the {{ic|"TearFree"}} option in the driver by adding the following line to your [[#Configuration|configuration file]]:<br />
Option "TearFree" "true"<br />
<br />
See the [https://bugs.freedesktop.org/show_bug.cgi?id=37686 original bug report] for more info.<br />
<br />
{{Note|<br />
* This option may not work when {{ic|SwapbuffersWait}} is {{ic|false}}.<br />
* This option is problematic for applications that are very picky about vsync timing, like [[Wikipedia:Super Meat Boy|Super Meat Boy]].<br />
* This option does not work with UXA acceleration method, only with SNA.<br />
}}<br />
<br />
=== Disable Vertical Synchronization (VSYNC) ===<br />
The Intel driver uses [http://www.intel.com/support/graphics/sb/CS-004527.htm Triple Buffering] for vertical synchronization; this allows full performance while avoiding tearing. To turn vertical synchronization off (e.g. for benchmarking), use this {{ic|.drirc}} in your home directory:<br />
<br />
{{hc|~/.drirc|<br />
<device screen&#61;"0" driver&#61;"dri2"><br />
<application name&#61;"Default"><br />
<option name&#61;"vblank_mode" value&#61;"0"/><br />
</application><br />
</device>}}<br />
<br />
{{Warning|Do not use {{Pkg|driconf}} to create this file, it is buggy and will set the wrong driver.}}<br />
<br />
=== Setting scaling mode ===<br />
<br />
This can be useful for some full screen applications:<br />
<br />
$ xrandr --output LVDS1 --set PANEL_FITTING param<br />
<br />
where {{ic|param}} can be:<br />
<br />
* {{ic|center}}: resolution will be kept exactly as defined, no scaling will be made,<br />
* {{ic|full}}: scale the resolution so it uses the entire screen or<br />
* {{ic|full_aspect}}: scale the resolution to the maximum possible but keep the aspect ratio.<br />
<br />
If it does not work, try:<br />
<br />
$ xrandr --output LVDS1 --set "scaling mode" param<br />
<br />
where {{ic|param}} is one of {{ic|"Full"}}, {{ic|"Center"}} or {{ic|"Full aspect"}}.<br />
<br />
=== KMS Issue: console is limited to small area ===<br />
<br />
One of the low-resolution video ports may be enabled on boot, causing the console to use only a small area of the screen. To fix this, explicitly disable the port by adding {{ic|1=video=SVIDEO-1:d}} to the kernel command line in the boot loader. See [[Kernel parameters]] for more info.<br />
<br />
If that does not work, try disabling TV1 or VGA1 instead of SVIDEO-1.<br />
<br />
=== H.264 decoding on GMA 4500 ===<br />
<br />
The {{Pkg|libva-intel-driver}} package provides MPEG-2 decoding only for GMA 4500 series GPUs. H.264 decoding support is maintained in a separate g45-h264 branch, which can be used by installing the {{AUR|libva-intel-driver-g45-h264}} package, available in the [[Arch User Repository]]. Note however that this support is experimental and its development has been abandoned. Using the VA-API with this driver on a GMA 4500 series GPU will offload the CPU but may not result in as smooth a playback as non-accelerated playback. Tests using mplayer showed that using VA-API to play back an H.264-encoded 1080p video halved the CPU load (compared to the XV overlay) but resulted in very choppy playback, while 720p worked reasonably well [https://bbs.archlinux.org/viewtopic.php?id=150550]. This is echoed by other experiences [http://www.emmolution.org/?p=192&cpage=1#comment-12292].<br />
<br />
=== Setting brightness and gamma ===<br />
<br />
See [[Backlight]].<br />
<br />
== Troubleshooting ==<br />
<br />
=== SNA issues ===<br />
From {{ic|man 4 intel}}:<br />
:''There are a couple of backends available for accelerating the DDX. "UXA" (Unified Acceleration Architecture) is the mature backend that was introduced to support the GEM driver model. It is in the process of being superseded by "SNA" (Sandybridge's New Acceleration). Until that process is complete, the ability to choose which backend to use remains for backwards compatibility.''<br />
<br />
''SNA'' is the default acceleration method in {{Pkg|xf86-video-intel}}. If you experience issues with ''SNA'' (e.g. pixelated graphics, corrupt text, etc.), try using ''UXA'' instead, which can be done by adding the following line to your [[#Configuration|configuration file]]:<br />
Option "AccelMethod" "uxa"<br />
<br />
=== Blank screen during boot, when "Loading modules" ===<br />
<br />
If using "late start" KMS and the screen goes blank when "Loading modules", it may help to add {{ic|i915}} and {{ic|intel_agp}} to the initramfs. See [[Kernel mode setting#Early KMS start]] section.<br />
<br />
Alternatively, appending the following [[kernel parameter]] seems to work as well:<br />
<br />
video=SVIDEO-1:d<br />
<br />
If you need to output to VGA then try this:<br />
<br />
video=VGA-1:1280x800<br />
<br />
=== X freeze/crash with intel driver ===<br />
<br />
Some issues with X crashing, the GPU hanging, or X freezing can be fixed by disabling GPU acceleration with the {{ic|NoAccel}} option; add the following line to your [[#Configuration|configuration file]]:<br />
Option "NoAccel" "True"<br />
<br />
Alternatively, try to disable the 3D acceleration only with the {{ic|DRI}} option:<br />
Option "DRI" "False"<br />
<br />
If you experience crashes and have<br />
<br />
Option "TearFree" "true"<br />
Option "AccelMethod" "sna"<br />
<br />
in your configuration file, in most cases these can be fixed by adding<br />
<br />
i915.semaphores=1<br />
<br />
to your boot parameters.<br />
<br />
If you are using kernel 4.0.x or above on the Bay Trail architecture and frequently encounter complete system freezes (especially when watching video or using graphics intensively), try adding the following kernel parameter as a workaround until [https://bugs.freedesktop.org/show_bug.cgi?id=88012 this bug] is fixed permanently.<br />
<br />
intel_pstate=disable<br />
<br />
=== Adding undetected resolutions ===<br />
<br />
This issue is covered on the [[Xrandr#Adding undetected resolutions|Xrandr page]].<br />
<br />
=== Weathered colors (color range problem) ===<br />
<br />
{{Note|This problem is related to the [http://lists.freedesktop.org/archives/dri-devel/2013-January/033576.html changes] in the kernel 3.9. This problem still remains in kernel 4.1.}}<br />
Kernel 3.9 contains a new default "Automatic" mode for the "Broadcast RGB" property in the Intel driver. It is almost equivalent to "Limited 16:235" (instead of the old default "Full") whenever an HDMI/DP output is in a [http://raspberrypi.stackexchange.com/questions/7332/what-is-the-difference-between-cea-and-dmt CEA mode]. If a monitor does not support a signal in the limited color range, the result is weathered colors.<br />
<br />
{{Note|Some monitors/TVs support both color ranges. In that case, an option often known as ''Black Level'' may need to be adjusted to make them handle the signal correctly.}}<br />
<br />
One can force the full range with e.g. {{ic|xrandr --output <HDMI> --set "Broadcast RGB" "Full"}} (replace {{ic|<HDMI>}} with the appropriate output device; verify by running {{ic|xrandr}}). Add the command to your {{ic|.xprofile}} so that it runs when the graphical session starts.<br />
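A sketch of such an {{ic|~/.xprofile}} snippet (the output name {{ic|HDMI1}} is an example; use whatever name {{ic|xrandr}} reports on your system):<br />

```shell
#!/bin/sh
# Force the full RGB range on the HDMI output at session start.
# Guarded so the snippet is harmless on systems without xrandr or X.
if command -v xrandr >/dev/null 2>&1; then
    xrandr --output HDMI1 --set "Broadcast RGB" "Full" \
        || echo "xrandr failed (wrong output name, or no X session?)" >&2
fi
```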
<br />
{{Note|Some TVs can handle a signal in the limited range only. Setting Broadcast RGB to "Full" will then cause color clipping; you may need to set it to "Limited 16:235" manually to avoid the clipping.}}<br />
<br />
There are also other related problems which can be fixed by editing GPU registers. More information can be found in [http://lists.freedesktop.org/archives/intel-gfx/2012-April/016217.html] and [http://github.com/OpenELEC/OpenELEC.tv/commit/09109e9259eb051f34f771929b6a02635806404c].<br />
<br />
Unfortunately, the Intel driver does not support setting the color range through an {{ic|xorg.conf.d}} configuration file.<br />
<br />
A [https://bugzilla.kernel.org/show_bug.cgi?id=94921 bug report] has been filed, and a patch can be found in its attachment.<br />
<br />
=== Backlight is not adjustable===<br />
<br />
If after resuming from suspend, the hotkeys for changing the screen brightness do not take effect, check your configuration against the [[Backlight]] article.<br />
<br />
If the problem persists, try one of the following [[kernel parameters]]:<br />
<br />
acpi_osi=Linux<br />
acpi_osi="!Windows 2012"<br />
acpi_osi=<br />
<br />
=== Disabling frame buffer compression ===<br />
<br />
Enabling frame buffer compression on pre-Sandy Bridge CPUs results in endless error messages:<br />
<br />
$ dmesg | tail<br />
[ 2360.475430] [drm] not enough stolen space for compressed buffer (need 4325376 bytes), disabling<br />
[ 2360.475437] [drm] hint: you may be able to increase stolen memory size in the BIOS to avoid this<br />
<br />
The solution is to disable frame buffer compression which will slightly increase power consumption. In order to disable it add {{ic|i915.enable_fbc&#61;0}} to the kernel line parameters. More information on the results of disabled compression can be found [http://zinc.canonical.com/~cking/power-benchmarking/background-colour-and-framebuffer-compression/results.txt here].<br />
<br />
=== Corruption/Unresponsiveness in Chromium and Firefox ===<br />
<br />
If you experience corruption or unresponsiveness in Chromium and/or Firefox [[#SNA issues|set the AccelMethod to "uxa"]].<br />
<br />
=== Kernel crashing w/kernels 4.0+ on Broadwell/Core-M chips ===<br />
<br />
A few seconds after X/Wayland loads, the machine freezes and journalctl logs a kernel crash referencing the Intel graphics, as below:<br />
<br />
Jun 16 17:54:03 hostname kernel: BUG: unable to handle kernel NULL pointer dereference at (null)<br />
Jun 16 17:54:03 hostname kernel: IP: [< (null)>] (null)<br />
...<br />
Jun 16 17:54:03 hostname kernel: CPU: 0 PID: 733 Comm: gnome-shell Tainted: G U O 4.0.5-1-ARCH #1<br />
...<br />
Jun 16 17:54:03 hostname kernel: Call Trace:<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa055cc27>] ? i915_gem_object_sync+0xe7/0x190 [i915]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa0579634>] intel_execlists_submission+0x294/0x4c0 [i915]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa05539fc>] i915_gem_do_execbuffer.isra.12+0xabc/0x1230 [i915]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa055d349>] ? i915_gem_object_set_to_cpu_domain+0xa9/0x1f0 [i915]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffff811ba2ae>] ? __kmalloc+0x2e/0x2a0<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa0555471>] i915_gem_execbuffer2+0x141/0x2b0 [i915]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa042fcab>] drm_ioctl+0x1db/0x640 [drm]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffffa0555330>] ? i915_gem_execbuffer+0x450/0x450 [i915]<br />
Jun 16 17:54:03 hostname kernel: [<ffffffff8122339b>] ? eventfd_ctx_read+0x16b/0x200<br />
Jun 16 17:54:03 hostname kernel: [<ffffffff811ebc36>] do_vfs_ioctl+0x2c6/0x4d0<br />
Jun 16 17:54:03 hostname kernel: [<ffffffff811f6452>] ? __fget+0x72/0xb0<br />
Jun 16 17:54:03 hostname kernel: [<ffffffff811ebec1>] SyS_ioctl+0x81/0xa0<br />
Jun 16 17:54:03 hostname kernel: [<ffffffff8157a589>] system_call_fastpath+0x12/0x17<br />
Jun 16 17:54:03 hostname kernel: Code: Bad RIP value.<br />
Jun 16 17:54:03 hostname kernel: RIP [< (null)>] (null)<br />
<br />
This can be fixed by disabling execlist support, which became the default with kernel 4.0. Add the following kernel parameter:<br />
i915.enable_execlists=0<br />
<br />
This is known to be broken up to at least kernel 4.0.5.<br />
<br />
===Driver not working for Intel Skylake chips===<br />
<br />
For the driver to work on the new Intel Skylake (6th gen.) GPUs, {{ic|i915.preliminary_hw_support&#61;1}} must be added to your boot parameters.<br />
<br />
== See also ==<br />
<br />
* https://01.org/linuxgraphics/documentation (includes a list of supported hardware)</div>Mousemanhttps://wiki.archlinux.org/index.php?title=NFS/Troubleshooting&diff=311865NFS/Troubleshooting2014-04-26T09:13:38Z<p>Mouseman: /* Permissions issues */</p>
<hr />
<div>[[Category:Networking]]<br />
[[ar:NFS]]<br />
[[de:Network File System]]<br />
[[es:NFS]]<br />
[[fr:NFS]]<br />
[[it:NFSv4]]<br />
[[zh-CN:NFS]]<br />
{{Related articles start}}<br />
{{Related|NFS}}<br />
{{Related articles end}}<br />
<br />
This article is dedicated to common NFS problems and their solutions.<br />
<br />
== Server-side issues ==<br />
<br />
=== exportfs: /etc/exports:2: syntax error: bad option list ===<br />
<br />
Delete all spaces from the option list in {{ic|/etc/exports}}.<br />
<br />
=== Group/GID permissions issues ===<br />
<br />
If NFS shares mount fine and are fully accessible to the owner, but not to group members, check the number of groups that user belongs to. NFS has a limit of 16 on the number of groups a user can belong to. If you have users with more than this, you need to enable the {{ic|--manage-gids}} start-up flag for {{ic|rpc.mountd}} on the NFS server.<br />
<br />
{{hc|/etc/conf.d/nfs-server.conf|2=<br />
# Options for rpc.mountd.<br />
# If you have a port-based firewall, you might want to set up<br />
# a fixed port here using the --port option.<br />
# See rpc.mountd(8) for more details.<br />
<br />
MOUNTD_OPTS="--manage-gids"<br />
}}<br />
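To check whether a user is affected, count the groups; a quick sketch (replace {{ic|id -G}} with {{ic|id -G <username>}} to check another account):<br />

```shell
#!/bin/sh
# Without --manage-gids, NFS honours at most 16 groups per user.
count=$(id -G | wc -w)
echo "group count: $count"
if [ "$count" -gt 16 ]; then
    echo "over the 16-group limit: enable --manage-gids on the server"
fi
```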
<br />
=== "Permission denied" when trying to write files ===<br />
<br />
* If you need to mount shares as root, and have full r/w access from the client, add the no_root_squash option to the export in {{ic|/etc/exports}}:<br />
/var/cache/pacman/pkg 192.168.1.0/24(rw,no_subtree_check,no_root_squash)<br />
<br />
== Client-side issues ==<br />
<br />
=== mount.nfs4: No such device ===<br />
<br />
Check that you have loaded the {{ic|nfs}} module<br />
lsmod | grep nfs<br />
and if the previous command returns nothing, or only nfsd-related entries, do<br />
# modprobe nfs<br />
<br />
=== mount.nfs4: access denied by server while mounting ===<br />
<br />
NFS shares have to reside in {{ic|/srv}}: check your {{ic|/etc/exports}} file and, if necessary, create the proper folder structure as described on the [[NFS#File_system]] page.<br />
<br />
Check that the permissions on the folder are correct; try using 755.<br />
<br />
Alternatively, run {{ic|exportfs -rav}} on the server to reload the {{ic|/etc/exports}} file.<br />
<br />
=== Unable to connect from OS X clients ===<br />
<br />
When trying to connect from an OS X client, everything will look fine in the logs, but OS X still refuses to mount the NFS share. Add the {{ic|insecure}} option to the export and re-run {{ic|exportfs -r}}.<br />
<br />
=== Unreliable connection from OS X clients ===<br />
<br />
OS X's NFS client is optimized for OS X servers and might present some issues with Linux servers. If you are experiencing slow performance, frequent disconnects and problems with international characters, edit the default mount options by adding the line {{ic|<nowiki>nfs.client.mount.options = intr,locallocks,nfc</nowiki>}} to {{ic|/etc/nfs.conf}} on your Mac client. More information about the mount options can be found [https://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man8/mount_nfs.8.html#//apple_ref/doc/man/8/mount_nfs here].<br />
<br />
=== Intermittent client freezes when copying large files ===<br />
<br />
If you copy large files from your client machine to the NFS server, the transfer speed is ''very'' fast, but after some seconds the speed drops and your client machine intermittently locks up completely for some time until the transfer is finished.<br />
<br />
Try adding {{ic|sync}} as a mount option on the client (e.g. in {{ic|/etc/fstab}}) to fix this problem.<br />
<br />
=== Lock problems ===<br />
<br />
If you get an error such as this:<br />
mount.nfs: rpc.statd is not running but is required for remote locking.<br />
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.<br />
mount.nfs: an incorrect mount option was specified<br />
<br />
To fix this, set {{ic|NEED_STATD}} to {{ic|YES}} in {{ic|/etc/conf.d/nfs-common.conf}}.<br />
<br />
Remember to start all the required services (see [[NFS]]), not just the {{ic|nfs}} service.<br />
<br />
=== mount.nfs: Operation not permitted ===<br />
<br />
After updating to ''nfs-utils'' 1.2.1-2 or higher, mounting NFS shares stopped working, because ''nfs-utils'' now uses NFSv4 by default instead of NFSv3. The problem can be solved by using either the mount option {{ic|1='vers=3'}} or {{ic|1='nfsvers=3'}} on the command line: <br />
# mount.nfs ''remote target'' ''directory'' -o ...,vers=3,...<br />
# mount.nfs ''remote target'' ''directory'' -o ...,nfsvers=3,...<br />
or in {{ic|/etc/fstab}}:<br />
''remote target'' ''directory'' nfs ...,vers=3,... 0 0<br />
''remote target'' ''directory'' nfs ...,nfsvers=3,... 0 0<br />
<br />
== Performance issues ==<br />
<br />
This [http://nfs.sourceforge.net/nfs-howto/ar01s05.html NFS Howto page] has some useful information regarding performance. Here are some further tips:<br />
<br />
=== Diagnose the problem ===<br />
<br />
* '''Htop''' should be your first port of call. The most obvious symptom will be a maxed-out CPU.<br />
* Press F2, and under "Display options", enable "Detailed CPU time". Press F1 for an explanation of the colours used in the CPU bars. In particular, is the CPU spending most of its time responding to IRQs, or in Wait-IO (wio)?<br />
<br />
=== Server threads ===<br />
<br />
'''Symptoms:''' Nothing seems to be very heavily loaded, but some operations on the client take a long time to complete for no apparent reason.<br />
<br />
If your workload involves lots of small reads and writes (or if there are a lot of clients), there may not be enough threads running on the server to handle the quantity of queries. To check if this is the case, run the following command on one or more of the clients:<br />
<br />
{{hc|# nfsstat -rc|<br />
Client rpc stats:<br />
calls retrans authrefrsh<br />
113482 0 113484<br />
}}<br />
<br />
If the {{ic|retrans}} column contains a number larger than 0, the server is failing to respond to some NFS requests, and the number of threads should be increased.<br />
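For scripting, the {{ic|retrans}} figure can be pulled out of that output; a sketch, fed here with the sample shown above (in practice, pipe {{ic|nfsstat -rc}} in directly):<br />

```shell
#!/bin/sh
# retrans is the second field of the third line of `nfsstat -rc` output.
sample='Client rpc stats:
calls      retrans    authrefrsh
113482     0          113484'

retrans=$(printf '%s\n' "$sample" | awk 'NR == 3 { print $2 }')
echo "retrans=$retrans"
if [ "$retrans" -gt 0 ]; then
    echo "the server may need more nfsd threads"
fi
```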
<br />
To increase the number of threads on the server, edit the file {{ic|/etc/conf.d/nfs-server.conf}} and change the value of the {{ic|NFSD_COUNT}} variable. The default number of threads is 8. Try doubling this number until {{ic|retrans}} remains consistently at zero. Don't be afraid of increasing the number quite substantially. 256 threads may be quite reasonable, depending on the workload. You will need to restart the NFS server daemon each time you modify the configuration file. Bear in mind that the client statistics will only be reset to zero when the client is rebooted.<br />
<br />
Use '''htop''' (disable the hiding of kernel threads) to keep an eye on how much work each nfsd thread is doing. If you reach a point where the {{ic|retrans}} values are non-zero, but you can see {{ic|nfsd}} threads on the server doing no work, something different is now causing your bottleneck, and you'll need to re-diagnose this new problem.<br />
<br />
=== Close-to-open/flush-on-close ===<br />
<br />
'''Symptoms:''' Your clients are writing many small files. The server CPU is not maxed out, but there is very high wait-IO, and the server disk seems to be churning more than you might expect.<br />
<br />
In order to ensure data consistency across clients, the NFS protocol requires that the client's cache is flushed (all data is pushed to the server) whenever a file is closed after writing. Because the server is not allowed to buffer disk writes (if it crashes, the client won't realise the data wasn't written properly), the data is written to disk immediately before the client's request is completed. When you're writing lots of small files from the client, this means that the server spends most of its time waiting for small files to be written to its disk, which can cause a significant reduction in throughput.<br />
<br />
See [http://docstore.mik.ua/orelly/networking_2ndEd/nfs/ch07_04.htm this excellent article] or the '''nfs''' manpage for more details on the close-to-open policy. There are several approaches to solving this problem:<br />
<br />
==== The nocto mount option ====<br />
<br />
{{Note|The Linux kernel does not seem to honour this option properly; files are still flushed when they are closed.}}<br />
<br />
Does your situation match these conditions?<br />
<br />
* The export you have mounted on the client is only going to be used by the one client.<br />
* It doesn't matter too much if a file written on one client doesn't immediately appear on other clients.<br />
* It does not matter if a file that a client has written, and believes to have been saved, is lost when that client crashes.<br />
<br />
If you're happy with the above conditions, you can use the '''nocto''' mount option, which will disable the close-to-open behaviour. See the '''nfs''' manpage for details.<br />
<br />
==== The async export option ====<br />
<br />
Does your situation match these conditions?<br />
<br />
* It's important that when a file is closed after writing on one client, it is:<br />
** Immediately visible on all the other clients.<br />
** Safely stored on the server, even if the client crashes immediately after closing the file.<br />
* It's not important to you that if the server crashes:<br />
** You may lose the files that were most recently written by clients.<br />
** When the server is restarted, the clients will believe their recent files exist, even though they were actually lost.<br />
<br />
In this situation, you can use {{ic|async}} instead of {{ic|sync}} in the server's {{ic|/etc/exports}} file for those specific exports. See the '''exports''' manual page for details. In this case, it does not make sense to use the {{ic|nocto}} mount option on the client.<br />
<br />
=== Buffer cache size and MTU ===<br />
<br />
'''Symptoms:''' High kernel or IRQ CPU usage, a very high packet count through the network card.<br />
<br />
This is a trickier optimisation. Make sure this is definitely the problem before spending too much time on this. The default values are usually fine for most situations.<br />
<br />
See [http://docstore.mik.ua/orelly/networking_2ndEd/nfs/ch07_03.htm this excellent article] for information about I/O buffering in NFS. Essentially, data is accumulated into buffers before being sent. The size of the buffer will affect the way data is transmitted over the network. The Maximum Transmission Unit (MTU) of the network equipment will also affect throughput, as the buffers need to be split into MTU-sized chunks before they're sent over the network. If your buffer size is too big, the kernel or hardware may spend too much time splitting it into MTU-sized chunks. If the buffer size is too small, there will be overhead involved in sending a very large number of small packets. You can use the '''rsize''' and '''wsize''' mount options on the client to alter the buffer cache size. To achieve the best throughput, you need to experiment and discover the best values for your setup.<br />
<br />
It is possible to change the MTU of many network cards. If your clients are on a separate subnet (e.g. for a Beowulf cluster), it may be safe to configure all of the network cards to use a high MTU. This is most beneficial in very-high-bandwidth environments.<br />
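Before experimenting, check which MTU each interface currently uses; a sketch (the commented {{ic|ip link}} line is an example and requires root):<br />

```shell
#!/bin/sh
# Print the current MTU of every network interface.
for dev in /sys/class/net/*; do
    printf '%s: MTU %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done
# To change it (interface name and value are examples):
#   ip link set dev eth0 mtu 9000
```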
<br />
See also the '''nfs''' manual page for more about '''rsize''' and '''wsize'''.<br />
<br />
== Other issues ==<br />
<br />
=== Permissions issues ===<br />
<br />
If you find that you cannot set the permissions on files properly, make sure the user/group you are chowning are on both the client and server.<br />
<br />
If all your files are owned by {{ic|nobody}}, and you are using NFSv4, on both the client and server, you should:<br />
* For systemd, ensure that the {{ic|rpc-idmapd}} service has been started.<br />
* For initscripts, ensure that {{ic|NEED_IDMAPD}} is set to {{ic|YES}} in {{ic|/etc/conf.d/nfs-common.conf}}.<br />
<br />
On some systems, detecting the domain from the FQDN minus the hostname does not work reliably. If files still show as owned by {{ic|nobody}} after the above changes, edit {{ic|/etc/idmapd.conf}} and set {{ic|Domain}} explicitly. For example:<br />
<br />
{{hc|/etc/idmapd.conf|2=<br />
[General]<br />
<br />
Verbosity = 7<br />
Pipefs-Directory = /var/lib/nfs/rpc_pipefs<br />
Domain = yourdomain.local<br />
<br />
[Mapping]<br />
<br />
Nobody-User = nobody<br />
Nobody-Group = nobody<br />
<br />
[Translation]<br />
<br />
Method = nsswitch<br />
}}<br />
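To see what "FQDN minus hostname" evaluates to on a given machine, a quick sketch:<br />

```shell
#!/bin/sh
# Print the FQDN and the domain part (everything after the first dot).
fqdn=$(hostname -f 2>/dev/null || uname -n)
echo "FQDN:   $fqdn"
echo "Domain: ${fqdn#*.}"   # equals the whole name if it contains no dot
```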
<br />
Please refer to [[Nfs#ID_mapping]] for more information.</div>Mouseman