https://wiki.archlinux.org/api.php?action=feedcontributions&user=Veox&feedformat=atomArchWiki - User contributions [en]2024-03-28T08:49:36ZUser contributionsMediaWiki 1.41.0https://wiki.archlinux.org/index.php?title=Dark_mode_switching&diff=728386Dark mode switching2022-05-02T14:29:40Z<p>Veox: /* Qt */ Fix in-wiki link (dest has space added).</p>
<hr />
<div>[[Category:Eye candy]]<br />
[[Category:Accessibility]]<br />
[[ja:ダークモードの切り替え]]<br />
[[ru:Dark mode switching]]<br />
[[zh-hans:Dark mode switching]]<br />
Switching between light and dark modes/themes is a nice feature to have: it allows you, for example, to switch to dark mode at sunset or to toggle modes with a keyboard shortcut.<br />
<br />
A theme switch can apply either only to newly launched programs or also to programs that are already running, which usually requires a daemon.<br />
This article focuses on switching at runtime, so that toggling during use affects currently running programs.<br />
<br />
Switching between light and dark mode requires support from applications or application toolkits like [[GTK]] and [[Qt]].<br />
<br />
== Tools ==<br />
<br />
* {{App|[https://gitlab.com/WhyNotHugo/darkman/ darkman]|darkman is a tool that allows automating transitioning to dark mode at sundown, and back to light mode at sunrise. It allows placing drop-in scripts to be run automatically at those times.|https://gitlab.com/WhyNotHugo/darkman/|{{AUR|darkman}}}}<br />
* {{App|[https://github.com/oskarsh/Yin-Yang Yin-Yang]|Yin-Yang is another tool with a similar feature set, although running custom scripts is not supported yet.|https://github.com/oskarsh/Yin-Yang|{{AUR|yin-yang-git}}}}<br />
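For example, darkman executes scripts placed in {{ic|~/.local/share/dark-mode.d/}} and {{ic|~/.local/share/light-mode.d/}} when the mode changes. A minimal drop-in might look like this (the file name is arbitrary; the gsettings call assumes a GTK environment using the Adwaita theme):<br />

```shell
#!/bin/sh
# ~/.local/share/dark-mode.d/gtk-theme.sh - run by darkman at sundown.
# Assumes the Adwaita theme; substitute the dark variant of your own theme.
gsettings set org.gnome.desktop.interface gtk-theme 'Adwaita-dark'
```

A mirrored script in {{ic|light-mode.d}} setting {{ic|Adwaita}} restores the light variant at sunrise.<br />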
<br />
== Toolkits ==<br />
<br />
=== GTK ===<br />
<br />
To change between light and dark mode, you have to change the theme in use.<br />
<br />
Most themes have a dark variant, which by convention carries the suffix {{ic|-dark}}. For example, the default [[GTK]] theme {{ic|Adwaita}} has the variant {{ic|Adwaita-dark}}.<br />
<br />
To permanently change to the dark variant, see [[GTK#Dark theme variant]].<br />
<br />
To switch themes instantly for running programs, either a daemon providing the [https://www.freedesktop.org/wiki/Specifications/xsettings-spec/ xsettings spec] or gsettings is required. For desktops running with [[Xorg]], an xsettings daemon is needed. For desktops running with [[Wayland]], gsettings is queried.<br />
<br />
==== xsettings daemon ====<br />
<br />
xsettings is queried for [[Xorg]] sessions.<br />
<br />
The xsettings daemon from [[Xfce]] is [https://docs.xfce.org/xfce/xfce4-settings/xfsettingsd xfsettingsd], which is provided by the {{Pkg|xfce4-settings}} package.<br />
<br />
To query the current GTK theme:<br />
$ xfconf-query -c xsettings -p /Net/ThemeName<br />
<br />
To set the GTK theme:<br />
$ xfconf-query -c xsettings -p /Net/ThemeName -s "new-theme"<br />
<br />
Changes to this entry are instant and affect all GTK applications.<br />
<br />
==== gsettings ====<br />
<br />
gsettings is queried for [[Wayland]] sessions.<br />
<br />
To query the current GTK theme:<br />
$ gsettings get org.gnome.desktop.interface gtk-theme<br />
<br />
To set the GTK theme:<br />
$ gsettings set org.gnome.desktop.interface gtk-theme "new-theme"<br />
<br />
Changes to this entry are instant and affect all GTK applications.<br />
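The {{ic|-dark}} naming convention makes a toggle script straightforward: derive the counterpart name from the current theme and feed it back to one of the set commands above. A minimal POSIX sh sketch ({{ic|next_theme}} is a hypothetical helper name, not part of any toolkit):<br />

```shell
# Derive the counterpart theme name: strip or append the -dark suffix.
next_theme() {
    case "$1" in
        *-dark) printf '%s\n' "${1%-dark}" ;;
        *)      printf '%s\n' "$1-dark" ;;
    esac
}

next_theme Adwaita        # prints Adwaita-dark
next_theme Adwaita-dark   # prints Adwaita
```

Combined with the query and set commands above, this yields a complete light/dark toggle suitable for binding to a keyboard shortcut.<br />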
<br />
=== Qt ===<br />
<br />
[[Qt]] has theme support similar to GTK.<br />
<br />
One method to theme Qt applications is [[Uniform_look_for_Qt_and_GTK_applications#QGtkStyle|using GTK for styling]].<br />
Changes to the GTK theme then affect Qt applications as well.<br />
<br />
Another method is using a native Qt theme, e.g. {{AUR|adwaita-qt}}. To switch between themes, you can follow [[Qt#Configuration of Qt 5 applications under environments other than KDE Plasma]].<br />
<br />
== Applications ==<br />
<br />
=== Firefox ===<br />
<br />
[[Firefox]] automatically uses the current GTK theme mode and adapts the appearance of the browser accordingly. See [[Firefox#Dark themes]] for some more settings and caveats.<br />
<br />
To darken web content intelligently as well, the [https://addons.mozilla.org/firefox/addon/darkreader/ Dark Reader] add-on is recommended.<br />
<br />
When {{ic|Automation}} is set to {{ic|Use system color scheme}}, Dark Reader activates automatically together with dark GTK themes.<br />
<br />
=== Thunderbird ===<br />
<br />
[[Thunderbird]] follows the current GTK theme, but some changes are recommended.<br />
<br />
See [[Thunderbird#Theming tweaks]].<br />
<br />
=== Visual Studio Code ===<br />
<br />
To change the theme in [[Visual Studio Code]], [https://github.com/sandygk/woof-configs/blob/master/bin/set_theme_vscode this script] may help.<br />
<br />
=== Alacritty ===<br />
<br />
[[Alacritty]] has support for multiple custom color schemes. The configuration syntax and published color schemes can be found in the [https://github.com/alacritty/alacritty/wiki/Color-schemes Alacritty wiki].<br />
<br />
To switch themes quickly, declare a YAML anchor for each color scheme, for example {{ic|&black}}. You can then activate a scheme by simply setting {{ic|colors: *black}}. This change to the configuration file takes effect immediately and affects all currently running instances. If it does not, you may have to set {{ic|live_config_reload: true}}.<br />
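For illustration, a sketch of the relevant configuration (scheme definitions abbreviated; the color values and the {{ic|schemes}} key name are placeholders following published examples):<br />

```yaml
schemes:
  black: &black              # anchor declaration
    primary:
      background: '#000000'
      foreground: '#ffffff'
  light: &light
    primary:
      background: '#ffffff'
      foreground: '#000000'

# Switch schemes by pointing the alias at a different anchor
colors: *black

# Apply configuration changes to running instances immediately
live_config_reload: true
```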
<br />
The borders and title bar are themed with GTK. To follow the GTK theme, leave {{ic|gtk_theme_variant}} at its default value {{ic|None}}.</div>
<hr />
<div>[[Category:File sharing]]<br />
[[Category:Web applications]]<br />
[[ja:Nextcloud]]<br />
{{Related articles start}}<br />
{{Related|Apache HTTP Server}}<br />
{{Related|Nginx}}<br />
{{Related|UWSGI}}<br />
{{Related|OpenSSL}}<br />
{{Related|WebDAV}}<br />
{{Related articles end}}<br />
<br />
From [[Wikipedia:Nextcloud]]:<br />
<br />
:Nextcloud is a suite of client-server software for creating and using file hosting services. It is functionally similar to Dropbox, although Nextcloud is free and open-source, allowing anyone to install and operate it on a private server. In contrast to proprietary services like Dropbox, the open architecture allows adding additional functionality to the server in form of applications.<br />
<br />
Nextcloud is a fork of ownCloud. For differences between the two, see [[Wikipedia:Nextcloud#Differences from ownCloud]].<br />
<br />
{{Note|For version 23, Nextcloud's PHP code has been extensively patched for the Arch Linux package to achieve PHP 8.1 compatibility. The downside is that Nextcloud's built-in code integrity check fails for all patched files; the corresponding warnings in the admin view can be ignored. Version 24 is [https://github.com/nextcloud/server/issues/29287#issuecomment-1019277996 expected to be compatible] with PHP 8.1 and is scheduled for [https://github.com/nextcloud/server/wiki/Maintenance-and-Release-Schedule#nextcloud-24 early May].}}<br />
<br />
== Setup overview ==<br />
<br />
A complete installation of Nextcloud comprises (at least) the following components:<br />
<br />
'''a web server''' paired with '''an application server''' running '''Nextcloud''' (i.e. the PHP code), backed by '''a database'''.<br />
<br />
This article will cover [[MariaDB]]/MySQL and [[PostgreSQL]] as databases and the following combinations of web server and application server:<br />
<br />
* nginx &rarr; uWSGI (plus uwsgi-plugin-php)<br />
* nginx &rarr; php-fpm<br />
* Apache (using mod_proxy_uwsgi) &rarr; uWSGI (plus uwsgi-plugin-php)<br />
* Apache (using mod_proxy_fcgi) &rarr; php-fpm<br />
<br />
The Nextcloud package complies with the [[web application package guidelines]]. Among other things, these mandate that the web application be run with a dedicated user - in this case {{ic|nextcloud}}. This is one of the reasons why an application server comes into play here. For the same reason it is no longer possible to execute Nextcloud's PHP code directly in the Apache process by means of {{pkg|php-apache}}.<br />
<br />
== Installation ==<br />
<br />
Install the {{Pkg|nextcloud}} package. This will pull in quite a few dependent packages, which takes care of all [https://docs.nextcloud.com/server/stable/admin_manual/installation/source_installation.html#prerequisites-for-manual-installation required PHP extensions]. Additionally install the recommended packages {{Pkg|php-imagick}} for preview generation and {{Pkg|php-intl}} for increased translation performance and correct sorting (preferably as dependent packages with the pacman option {{ic|--asdeps}}). Other optional dependencies will be covered later depending on your concrete setup (e.g. which database you choose).<br />
<br />
== Configuration ==<br />
<br />
=== PHP ===<br />
<br />
This guide does not tamper with PHP's central configuration file {{ic|/etc/php/php.ini}} but instead puts Nextcloud specific PHP configuration in places where it does not potentially interfere with settings for other PHP based applications. These places are:<br />
<br />
* A dedicated copy of {{ic|php.ini}} in {{ic|/etc/webapps/nextcloud/php.ini}} (for the {{ic|occ}} command line tool and the background job).<br />
* Corresponding settings in the configuration of the application server. These will be covered in the section about application servers.<br />
<br />
Make a copy of {{ic|/etc/php/php.ini}} in {{ic|/etc/webapps/nextcloud}}. Although not strictly necessary, change the ownership of the copy:<br />
<br />
{{bc|cp /etc/php/php.ini /etc/webapps/nextcloud/php.ini<br />
chown nextcloud:nextcloud /etc/webapps/nextcloud/php.ini}}<br />
<br />
Most of the prerequisites listed in Nextcloud's [https://docs.nextcloud.com/server/stable/admin_manual/installation/source_installation.html#prerequisites-for-manual-installation installation instructions] are already enabled in a bare PHP installation. Additionally enable the following extensions:<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
extension=bcmath<br />
extension=bz2<br />
extension=exif<br />
extension=gd<br />
extension=iconv<br />
; in case you installed php-imagick (as recommended)<br />
extension=imagick<br />
; in case you also installed php-intl (as recommended)<br />
extension=intl<br />
}}<br />
<br />
Set {{ic|date.timezone}} to your preferred timezone, e.g.:<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
date.timezone = Europe/Berlin<br />
}}<br />
<br />
Raise PHP's memory limit to at least 512MiB:<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
memory_limit = 512M<br />
}}<br />
<br />
Optional: For additional security configure {{ic|open_basedir}}. This limits the locations where Nextcloud's PHP code can read and write files. Proven settings are<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
open_basedir=/var/lib/nextcloud/data:/var/lib/nextcloud/apps:/tmp:/usr/share/webapps/nextcloud:/etc/webapps/nextcloud:/dev/urandom:/usr/lib/php/modules:/var/log/nextcloud:/proc/meminfo<br />
}}<br />
<br />
Depending on which additional extensions you configure you may need to extend this list, e.g. {{ic|/run/redis}} in case you opt for [[Redis]].<br />
<br />
It is not necessary to configure opcache here as this {{ic|php.ini}} is only used by the {{ic|occ}} command line tool and the background job, i.e. by short running PHP processes.<br />
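Whether the dedicated {{ic|php.ini}} is complete can be checked from the command line (assuming the paths and extensions from above):<br />

```shell
# List the modules PHP loads with the Nextcloud-specific configuration
php -c /etc/webapps/nextcloud/php.ini -m | grep -E 'bcmath|bz2|exif|gd|iconv|imagick|intl'
```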
<br />
=== Nextcloud ===<br />
<br />
Add the following entries to Nextcloud's configuration file:<br />
<br />
{{hc|/etc/webapps/nextcloud/config/config.php|2=<nowiki><br />
'trusted_domains' =><br />
array (<br />
0 => 'localhost',<br />
1 => 'cloud.example.org',<br />
), <br />
'overwrite.cli.url' => 'https://cloud.example.org/',<br />
'htaccess.RewriteBase' => '/',<br />
</nowiki>}}<br />
<br />
Adapt the given example hostname {{ic|cloud.example.org}}. In case your Nextcloud installation will be reachable via a subfolder (e.g. {{ic|<nowiki>https://www.example.com/nextcloud</nowiki>}}), {{ic|overwrite.cli.url}} and {{ic|htaccess.RewriteBase}} have to be modified accordingly.<br />
<br />
=== System and environment ===<br />
<br />
To make sure the Nextcloud-specific {{ic|php.ini}} is used by the {{ic|occ}} tool, set the environment variable {{ic|NEXTCLOUD_PHP_CONFIG}}:<br />
<br />
{{bc|1=<br />
export NEXTCLOUD_PHP_CONFIG=/etc/webapps/nextcloud/php.ini<br />
}}<br />
<br />
Also add this line to your {{ic|.bashrc}} to make this setting permanent.<br />
<br />
As a privacy and security precaution create the dedicated directory for session data:<br />
<br />
{{bc|1=<br />
install --owner=nextcloud --group=nextcloud --mode=700 -d /var/lib/nextcloud/sessions<br />
}}<br />
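The directory only takes effect once PHP's session handling is pointed at it. Assuming the dedicated {{ic|php.ini}} from above (the same value must then also appear in the application server's PHP settings):<br />

```ini
; /etc/webapps/nextcloud/php.ini
session.save_path = /var/lib/nextcloud/sessions
```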
<br />
== Database ==<br />
<br />
[[MariaDB]]/MySQL is the canonical choice for Nextcloud.<br />
<br />
:The MySQL or MariaDB databases are the recommended database engines.[https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/linux_database_configuration.html]<br />
<br />
Most information concerning databases with Nextcloud deals with MariaDB/MySQL. The Nextcloud developers admit to having [https://github.com/nextcloud/server/issues/5912#issuecomment-318568370 less detailed expertise] with other databases. <br />
<br />
[[PostgreSQL]] is said to deliver better performance and overall has fewer quirks compared to MariaDB/MySQL. [[SQLite]] is mainly supported for test / development installations and not recommended for production. The [https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/linux_database_configuration.html list of supported databases] also contains [[Oracle Database]]. This product will not be covered here.<br />
<br />
=== MariaDB / MySQL ===<br />
<br />
[[MariaDB]] has been the default MySQL implementation in Arch Linux since 2013[https://archlinux.org/news/mariadb-replaces-mysql-in-repositories/], so this text only mentions MariaDB.<br />
<br />
In case you want to run your database on the same host as Nextcloud install {{Pkg|mariadb}} (if you have not done so already). See the corresponding [[MariaDB|article]] for details. Do not forget to initialize MariaDB with {{ic|mariadb-install-db}}. It is recommended for additional security to configure MariaDB to [[MariaDB#Enable_access_locally_only_via_Unix_sockets|only listen on a local Unix socket]]:<br />
<br />
{{hc|/etc/my.cnf.d/server.cnf|2=<br />
[mysqld]<br />
skip_networking<br />
}}<br />
<br />
{{Note|Surprisingly, Nextcloud is not compatible with MariaDB version 10.6 or higher (see {{Bug|71549}}). This is due to MariaDB forcing read-only mode for compressed InnoDB tables[https://mariadb.com/kb/en/innodb-compressed-row-format/#read-only] and Nextcloud using tables of this kind:<br />
<br />
:From MariaDB 10.6.0, tables that are of the {{ic|COMPRESSED}} row format are read-only by default. This is the first step towards removing write support and deprecating the feature.<br />
<br />
Upstream is aware of this [https://github.com/nextcloud/server/issues/25436 problem] but a [https://github.com/nextcloud/server/issues/25436#issuecomment-883213001 quick fix seems unlikely].<br />
<br />
One easy remedy for this issue is to allow write access to compressed InnoDB tables again by means of MariaDB's system variable [https://mariadb.com/docs/reference/mdb/system-variables/innodb_read_only_compressed/ innodb_read_only_compressed]. Just add the following section to your configuration of MariaDB:<br />
<br />
{{hc|/etc/my.cnf.d/server.cnf|2=<br />
[mariadb-10.6]<br />
innodb_read_only_compressed=OFF<br />
}}<br />
}}<br />
<br />
Nextcloud's own documentation [https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/linux_database_configuration.html#database-read-committed-transaction-isolation-level recommends] setting the transaction isolation level to READ-COMMITTED. This is especially important when you expect high load with many concurrent transactions.<br />
<br />
{{hc|/etc/my.cnf.d/server.cnf|2=<br />
[mysqld]<br />
transaction_isolation=READ-COMMITTED}}<br />
<br />
The other recommendation to set {{ic|1=binlog_format=ROW}} is obsolete. The default {{ic|MIXED}} in recent MariaDB versions is at least as good as the recommended {{ic|ROW}}. In any case the setting is only relevant when replication is applied.<br />
<br />
Start the CLI tool {{ic|mysql}} as database user root. (The default password is empty; change it as soon as possible.)<br />
<br />
{{bc|mysql -u root -p}}<br />
<br />
Create the user and database for Nextcloud with <br />
<br />
{{bc|<br />
CREATE USER 'nextcloud'@'localhost' IDENTIFIED BY 'xxxxxxxx';<br />
CREATE DATABASE IF NOT EXISTS nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;<br />
GRANT ALL PRIVILEGES on nextcloud.* to 'nextcloud'@'localhost';<br />
FLUSH privileges;}}<br />
<br />
({{ic|xxxxxxxx}} is a placeholder for the actual password of DB user ''nextcloud'' that you must choose.) Quit the tool with {{ic|\q}}.<br />
<br />
{{Note|MariaDB has a flawed understanding of what UTF8 means, resulting in the inability to store any characters with codepoints 0x10000 and above (e.g. emojis). This was 'fixed' in version 5.5 by introducing a new encoding called ''utf8mb4''. Bottom line: never use MariaDB's ''utf8'', always use ''utf8mb4''. In case you need to migrate see [https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/mysql_4byte_support.html].}}<br />
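The underlying limitation is easy to demonstrate: code points above 0xFFFF encode to four bytes in UTF-8, one more than MariaDB's three-byte ''utf8'' can store per character:<br />

```shell
# U+1F600 (grinning face emoji) encodes to four UTF-8 bytes
printf '\360\237\230\200' | wc -c
```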
<br />
Now that you have decided to use MariaDB as the database of your Nextcloud installation, you have to enable the corresponding PHP extension:<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
extension=pdo_mysql<br />
}}<br />
<br />
Further configuration (related to MariaDB) is not required (contrary to the information given in Nextcloud's [https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/linux_database_configuration.html#configuring-a-mysql-or-mariadb-database admin manual]).<br />
<br />
Now setup Nextcloud's database schema with:<br />
<br />
{{bc|1=<br />
occ maintenance:install \<br />
--database=mysql \<br />
--database-name=nextcloud \<br />
--database-host=localhost:/run/mysqld/mysqld.sock \<br />
--database-user=nextcloud \<br />
--database-pass=xxxxxxxx \<br />
--admin-pass=zzzzzzzz \<br />
--admin-email=aaaa@bbbbb \<br />
--data-dir=/var/lib/nextcloud/data<br />
}}<br />
<br />
Mind the placeholders (e.g. {{ic|xxxxxxxx}}) and replace them with appropriate values. This command assumes that you run your database on the same host as Nextcloud. Enter {{ic|occ help maintenance:install}} and see Nextcloud's [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html#command-line-installation documentation] for other options.<br />
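A quick sanity check after the installation is the {{ic|occ}} tool itself:<br />

```shell
# Reports whether Nextcloud is installed, plus version information
occ status
```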
<br />
=== PostgreSQL ===<br />
<br />
Consult the corresponding [[PostgreSQL|article]] for detailed information about PostgreSQL. In case you want to run your database on the same host as Nextcloud install {{Pkg|postgresql}} (if you have not done so already). For additional security in this scenario it is recommended to configure PostgreSQL to [[PostgreSQL#Configure_PostgreSQL_to_be_accessible_exclusively_through_UNIX_Sockets|only listen on a local UNIX socket]]:<br />
<br />
{{hc|/var/lib/postgres/data/postgresql.conf|2=<br />
listen_addresses = <nowiki>''</nowiki><br />
}}<br />
<br />
In particular, do not forget to initialize your database with {{ic|initdb}}. After having done so, start PostgreSQL's CLI tool {{ic|psql}}<br />
<br />
{{bc|<br />
runuser -u postgres -- psql<br />
}}<br />
<br />
and create the database user {{ic|nextcloud}} and the database of the same name<br />
<br />
{{bc|1=<br />
CREATE USER nextcloud WITH PASSWORD 'xxxxxxxx';<br />
CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';<br />
ALTER DATABASE nextcloud OWNER TO nextcloud;<br />
GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;<br />
\q<br />
}}<br />
<br />
({{ic|xxxxxxxx}} is a placeholder for the password of database user ''nextcloud'' that you have to choose.)<br />
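Whether user and database were created as intended can be verified before proceeding:<br />

```shell
# List databases; the nextcloud entry should show nextcloud as owner
runuser -u postgres -- psql -l | grep nextcloud
```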
<br />
Install the additional package {{Pkg|php-pgsql}} as dependency (pacman option {{ic|--asdeps}}) and enable the corresponding PHP extension in {{ic|/etc/webapps/nextcloud/php.ini}}:<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
extension=pdo_pgsql<br />
}}<br />
<br />
Now setup Nextcloud's database schema with:<br />
<br />
{{bc|1=<br />
occ maintenance:install \<br />
--database=pgsql \<br />
--database-name=nextcloud \<br />
--database-host=/run/postgresql \<br />
--database-user=nextcloud \<br />
--database-pass=xxxxxxxx \<br />
--admin-pass=zzzzzzzz \<br />
--admin-email=aaaa@bbbbb \<br />
--data-dir=/var/lib/nextcloud/data<br />
}}<br />
<br />
Mind the placeholders (e.g. {{ic|xxxxxxxx}}) and replace them with appropriate values. This command assumes that you run your database on the same host as Nextcloud. Enter {{ic|occ help maintenance:install}} and see Nextcloud's [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html#command-line-installation documentation] for other options.<br />
<br />
== Application server ==<br />
<br />
There are two prevalent application servers that can be used to process PHP code: [[uWSGI]] and [https://cwiki.apache.org/confluence/display/HTTPD/PHP-FPM php-fpm]. ''php-fpm'', as the name suggests, specializes in PHP; the protocol used between the web server and ''php-fpm'' is ''FastCGI''. The tool's [https://www.php.net/manual/en/install.fpm.php documentation] leaves room for improvement. ''uWSGI'' on the other hand can serve code written in a [https://uwsgi-docs.readthedocs.io/en/latest/LanguagesAndPlatforms.html handful of languages] by means of language-specific plugins; the protocol used is ''uwsgi'' (lowercase). The tool is [https://uwsgi-docs.readthedocs.io/en/latest/index.html extensively documented], although the sheer amount of documentation can become confusing and unwieldy.<br />
<br />
{{Warning|It has to be mentioned that maintenance of uWSGI and especially of its PHP plugin has been sparse lately[https://github.com/unbit/uwsgi/issues/2287]. This has already caused [https://bugs.archlinux.org/task/73470 issues] that could only be solved by patching uWSGI code by the maintainers of Arch Linux packages, i.e. not upstream.}}<br />
<br />
=== uWSGI ===<br />
<br />
uWSGI has its own [[uWSGI|article]], where a lot of useful information can be found. Install {{pkg|uwsgi}} and the plugin {{pkg|uwsgi-plugin-php}} - preferably as dependencies, i.e. with {{ic|--asdeps}}. To run Nextcloud's code with (or in) uWSGI you have to create one uWSGI-specific configuration file ({{ic|nextcloud.ini}}) and define one systemd service.<br />
<br />
'''nextcloud.ini'''<br />
<br />
The {{Pkg|nextcloud}} package already includes a sample configuration file in the right place, {{ic|/etc/uwsgi/nextcloud.ini}}. In almost any case you will have to adapt this file to your requirements and setup. A [https://gist.githubusercontent.com/wolegis/fc0c01882b694777a6565aa1d0a4da47 version with lots of commented changes] (compared to the package's version) is available. It assumes a no-frills Nextcloud installation for private use (i.e. with moderate load).<br />
<br />
See section [[#Background jobs|Background jobs]] for arguments why not to configure recurring jobs in this file. In general keep the enabled extensions, extension specific settings and {{ic|open_basedir}} in sync with {{ic|/etc/webapps/nextcloud/php.ini}} (with the exception of opcache).<br />
<br />
{{Tip|The changes to {{ic|/etc/uwsgi/nextcloud.ini}} can become extensive. A file named {{ic|nextcloud.ini.pacnew}} will be created during package update in case there are changes in the original file provided by the package {{Pkg|nextcloud}}. In order to better track changes in this latter file and apply them to {{ic|/etc/uwsgi/nextcloud.ini}} the following approach can be applied:<br />
<br />
* Make a copy of the file as provided by the package (e.g. by extracting from the package) and store it as {{ic|nextcloud.ini.package}}.<br />
* In case an update of package {{Pkg|nextcloud}} produces a {{ic|nextcloud.ini.pacnew}} you can identify the changes with {{ic|diff nextcloud.ini.package nextcloud.ini.pacnew}}.<br />
* Selectively apply the changes to your {{ic|nextcloud.ini}} depending on whether they make sense with your version or not.<br />
* Move {{ic|nextcloud.ini.pacnew}} over {{ic|nextcloud.ini.package}}.<br />
}}<br />
<br />
'''Enable and start'''<br />
<br />
The package {{pkg|uwsgi}} provides a template unit file ({{ic|uwsgi@.service}}). The instance ID (here ''nextcloud'') is used to pick up the right configuration file. [[Enable/start]] {{ic|uwsgi@nextcloud.service}}. <br />
<br />
In case you have more than a few services started like this and get the impression that this is a waste of resources, you might consider using [https://uwsgi-docs.readthedocs.io/en/latest/Emperor.html emperor mode].<br />
<br />
=== php-fpm ===<br />
<br />
In case you opt to use ''php-fpm'' as your application server, install {{pkg|php-fpm}} - preferably as a dependent package ({{ic|--asdeps}}).<br />
<br />
Configuration consists of a copy of {{ic|php.ini}} relevant for all applications served by ''php-fpm'' and a so-called pool file specific to the application (here Nextcloud). Finally, you have to tweak the systemd service file.<br />
<br />
'''php-fpm.ini'''<br />
<br />
As stated earlier, this article avoids modifications of PHP's central configuration in {{ic|/etc/php/php.ini}}. Instead, create a ''php-fpm''-specific copy:<br />
<br />
{{bc|cp /etc/php/php.ini /etc/php/php-fpm.ini}}<br />
<br />
Make sure it is owned and only writable by root ({{ic|-rw-r--r-- 1 root root ... php-fpm.ini}}). Enable the opcache, i.e. uncomment the line<br />
<br />
{{bc|1=;zend_extension=opcache}}<br />
<br />
and put the following parameters below the existing line {{ic|[opcache]}}:<br />
<br />
{{hc|/etc/php/php-fpm.ini|2=opcache.enable = 1<br />
opcache.interned_strings_buffer = 8<br />
opcache.max_accelerated_files = 10000<br />
opcache.memory_consumption = 128<br />
opcache.save_comments = 1<br />
opcache.revalidate_freq = 1}}<br />
<br />
{{Warning|Do not try to put these settings in the pool file by means of {{ic|php_value[...]}} and {{ic|php_flag[...]}}. Your ''php-fpm'' processes will consistently crash with the very first request.}}<br />
<br />
'''nextcloud.conf pool file'''<br />
<br />
Next you have to create a so-called pool file for ''php-fpm''. It is responsible for spawning dedicated ''php-fpm'' processes for the Nextcloud application. Create a file {{ic|/etc/php/php-fpm.d/nextcloud.conf}} - you may use this [https://gist.githubusercontent.com/wolegis/0d9c83acd0c8bf83bcfb3983931bc364 functional version] as a starting point.<br />
<br />
Again, make sure this pool file is owned and only writable by root (i.e. {{ic|-rw-r--r-- 1 root root ... nextcloud.conf}}). Adapt or add settings (especially {{ic|pm...}}, {{ic|php_value[...]}} and {{ic|php_flag[...]}}) to your liking. The settings {{ic|php_value[...]}} and {{ic|php_flag[...]}} must be consistent with the corresponding settings in {{ic|/etc/webapps/nextcloud/php.ini}} (but not {{ic|/etc/php/php-fpm.ini}}).<br />
<br />
The settings done by means of {{ic|php_value[...]}} and {{ic|php_flag[...]}} could instead be specified in {{ic|php-fpm.ini}}. But mind that settings in {{ic|php-fpm.ini}} apply for all applications served by ''php-fpm''.<br />
<br />
{{Tip|The package {{pkg|php-fpm}} comes with its own pool file {{ic|www.conf}} that is of little use here. A good approach to get rid of it is to rename it to {{ic|www.conf.package}} and create a file {{ic|www.conf}} with only comment lines (lines starting with a semicolon). This way {{ic|www.conf}} becomes a no-op. It is also not overwritten during installation of a new version of {{pkg|php-fpm}}. Instead a file {{ic|www.conf.pacnew}} is created. You can compare this against {{ic|www.conf.package}} to see if anything significant has changed in the pool file that you may have to reproduce in {{ic|nextcloud.conf}}. Do not forget to rename {{ic|www.conf.pacnew}} to {{ic|www.conf.package}} at the end of this procedure.}}<br />
<br />
'''php-fpm service'''<br />
<br />
''php-fpm'' is (of course) run as a systemd service. You have to modify the service configuration to be able to run Nextcloud. This is best achieved by means of a [[drop-in file]]:<br />
<br />
{{hc|/etc/systemd/system/php-fpm.service.d/override.conf|2=<br />
[Service]<br />
ExecStart=<br />
ExecStart=/usr/bin/php-fpm --nodaemonize --fpm-config /etc/php/php-fpm.conf --php-ini /etc/php/php-fpm.ini<br />
ReadWritePaths=/var/lib/nextcloud<br />
ReadWritePaths=/etc/webapps/nextcloud/config<br />
}}<br />
<br />
* It replaces the {{ic|ExecStart}} line with a start command that uses the {{ic|php-fpm.ini}} covered in the previous section.<br />
* The directories {{ic|/var/lib/nextcloud}} and {{ic|/etc/webapps/nextcloud/config}} (and everything below) are made writable. The {{ic|1=ProtectSystem=full}} in the original service definition causes {{ic|/usr}}, {{ic|/boot}} and {{ic|/etc}} to be mounted read-only for the ''php-fpm'' processes.<br />
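Note that if the drop-in is created or changed while ''php-fpm'' is already running, systemd must re-read the unit before the change takes effect:<br />

```shell
systemctl daemon-reload
systemctl restart php-fpm.service
```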
<br />
'''Enable and start'''<br />
<br />
Do not forget to [[enable]] and [[start]] the ''php-fpm'' service.<br />
<br />
'''Keep /etc tidy'''<br />
<br />
The Nextcloud package unconditionally creates the uWSGI configuration file {{ic|/etc/uwsgi/nextcloud.ini}}. Of course it is of no use when you run ''php-fpm'' instead of ''uWSGI'' (though it does no harm whatsoever). In case you nevertheless want to get rid of it, just add the following lines to {{ic|/etc/pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
# uWSGI configuration that comes with Nextcloud is not needed<br />
NoExtract = etc/uwsgi/nextcloud.ini}}<br />
<br />
== Web server ==<br />
<br />
There is an abundance of web servers to choose from. Whichever option you pick, keep in mind that the Nextcloud application needs to be run with its own system user ''nextcloud'', so you will need to forward your requests to one of the above-mentioned application servers.<br />
<br />
=== nginx ===<br />
<br />
Configuration of ''nginx'' is beyond the scope of this article; see the relevant [[nginx|article]] for further information. Also consult [https://docs.nextcloud.com/server/latest/admin_manual/installation/nginx.html Nextcloud's documentation] for an elaborate sample configuration. Most likely you will have to copy it into a file with an appropriate name below {{ic|/etc/nginx/sites-available}} and create the corresponding symbolic link in {{ic|/etc/nginx/sites-enabled}}.<br />
<br />
The usage of the block {{ic|upstream php-handler { ... } }} is not necessary. Just specify {{ic|fastcgi_pass unix:/run/php-fpm/nextcloud.sock;}} in the {{ic|location}} block that forwards requests for PHP URIs to the application server. When using ''uWSGI'' instead of ''php-fpm'', replace this {{ic|location}} block with:<br />
<br />
{{bc|<br />
location ~ \.php(?:${{!}}/) {<br />
include uwsgi_params;<br />
uwsgi_modifier1 14;<br />
# Avoid duplicate headers confusing OC checks<br />
uwsgi_hide_header X-Frame-Options;<br />
uwsgi_hide_header X-XSS-Protection;<br />
uwsgi_hide_header X-Content-Type-Options;<br />
uwsgi_hide_header X-Robots-Tag;<br />
uwsgi_hide_header X-Download-Options;<br />
uwsgi_hide_header X-Permitted-Cross-Domain-Policies;<br />
uwsgi_pass unix:/run/uwsgi/nextcloud.sock;<br />
}<br />
}}<br />
<br />
Things you might have to adapt (not exhaustive):<br />
<br />
* Your server name (two {{ic|server_name}} clauses), i.e. the server part of the URL your Nextcloud installation will be reachable with.<br />
* The name of the certificate and key you use for SSL / TLS.<br />
* If and where you want an access log written to.<br />
* The location where [[Certbot]] (or any other ACME client) will put the domain verification challenges. Usage of {{ic|alias}} instead of {{ic|try_files}} is probably more adequate here.<br />
* The path used to reach your Nextcloud installation. (The part to the right of the server name &amp; port section in the URL.)<br />
* What application server (uWSGI or php-fpm) you are using, i.e. how and where nginx will pass requests that need to trigger some PHP code. (See above.)<br />
* Configure [[Wikipedia:OCSP_stapling|OCSP stapling]].<br />
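As a rough, incomplete sketch (server name, certificate paths and socket path are placeholders you must adapt; it omits the header, caching and {{ic|.well-known}} handling from Nextcloud's full sample configuration), the essential structure of the server block looks like:<br />
<br />
{{bc|1=<br />
server {<br />
    listen 443 ssl http2;<br />
    server_name cloud.example.com;<br />
<br />
    ssl_certificate     /etc/nginx/cloud.example.com.crt;<br />
    ssl_certificate_key /etc/nginx/cloud.example.com.key;<br />
<br />
    root /usr/share/webapps/nextcloud;<br />
    index index.php;<br />
<br />
    # pass PHP requests to the php-fpm application server<br />
    location ~ \.php(?:${{!}}/) {<br />
        include fastcgi_params;<br />
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;<br />
        fastcgi_pass unix:/run/php-fpm/nextcloud.sock;<br />
    }<br />
}<br />
}}<br />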
<br />
There is no need to install any additional modules since nginx natively supports both the FastCGI and uwsgi protocols.<br />
<br />
=== Apache httpd ===<br />
<br />
Find lots of useful information in the article about the [[Apache_HTTP_Server|Apache HTTP Server]]. Nextcloud's documentation has some [https://docs.nextcloud.com/server/latest/admin_manual/installation/source_installation.html#apache-web-server-configuration sample configuration] that can also be found in {{ic|/usr/share/doc/nextcloud/apache.example.conf}}. Both implicitly rely on ''mod_php'', which can no longer be used; ''mod_proxy_fcgi'' or ''mod_proxy_uwsgi'' must be used instead.<br />
<br />
Information about how to [[Apache_HTTP_Server#Using_php-fpm_and_mod_proxy_fcgi|integrate Apache with php-fpm]] can be found in this wiki. uWSGI's documentation explains how to [https://uwsgi-docs.readthedocs.io/en/latest/Apache.html integrate Apache with PHP by means of uWSGI and mod_proxy_uwsgi]. Mind that the Apache package comes with both modules ''mod_proxy_fcgi'' and ''mod_proxy_uwsgi''; they need to be loaded as required.<br />
<br />
The following Apache modules are required to run Nextcloud:<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
# these are already loaded in a standard Apache installation<br />
LoadModule headers_module modules/mod_headers.so<br />
LoadModule env_module modules/mod_env.so<br />
LoadModule dir_module modules/mod_dir.so<br />
LoadModule mime_module modules/mod_mime.so<br />
LoadModule setenvif_module modules/mod_setenvif.so<br />
<br />
# these need to be uncommented explicitly<br />
LoadModule rewrite_module modules/mod_rewrite.so<br />
LoadModule ssl_module modules/mod_ssl.so<br />
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so<br />
LoadModule proxy_module modules/mod_proxy.so<br />
<br />
# either this one in case you use php-fpm<br />
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so<br />
# or this one in case you opt for uWSGI<br />
LoadModule proxy_uwsgi_module modules/mod_proxy_uwsgi.so<br />
}}<br />
<br />
Also uncomment the following directive to pull in TLS configuration parameters:<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
Include conf/extra/httpd-ssl.conf}}<br />
<br />
Consult [https://ssl-config.mozilla.org/#server=apache&config=intermediate Mozilla's SSL configurator] for details about how to optimize your TLS configuration.<br />
<br />
Refer to the following two sample configuration files depending on how you want to access your Nextcloud installation:<br />
<br />
* In case your Nextcloud installation is accessed via a dedicated host name (e.g. <nowiki>https://cloud.example.com/</nowiki>) put [https://gist.github.com/wolegis/1659786ded9128935f638ee2bf228906 this] fragment into {{ic|/etc/httpd/conf/extra/httpd-vhosts.conf}}.<br />
<br />
* In case your Nextcloud installation is located in a subfolder of your web site (e.g. <nowiki>https://www.example.com/nextcloud/</nowiki>) put [https://gist.github.com/wolegis/002e198c2db7980a84fd8d160c2bdb9a this] fragment in your {{ic|/etc/httpd/conf/httpd.conf}}.<br />
<br />
Of course you must adapt these sample configuration files to your concrete setup. Replace the {{ic|SetHandler}} directive with {{ic|SetHandler "proxy:unix:/run/uwsgi/nextcloud.sock{{!}}uwsgi://nextcloud/"}} when you use ''uWSGI''.<br />
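For ''php-fpm'', the corresponding directive typically looks like the following (the socket path is the one used elsewhere in this article; adapt it if yours differs):<br />
<br />
 SetHandler "proxy:unix:/run/php-fpm/nextcloud.sock|fcgi://localhost/"<br />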
<br />
The Nextcloud package comes with a {{ic|.htaccess}} file that already takes care of most of the required rewrite rules and headers. Run {{ic|occ maintenance:update:htaccess}} to adapt this file. The parameter {{ic|htaccess.RewriteBase}} in {{ic|/etc/webapps/nextcloud/config/config.php}} is vital for this.<br />
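For an installation reachable directly under the server root, this parameter would look like the following (adapt the value in case Nextcloud lives in a subdirectory):<br />
<br />
{{hc|/etc/webapps/nextcloud/config/config.php|2='htaccess.RewriteBase' => '/',}}<br />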
<br />
== Background jobs ==<br />
<br />
Nextcloud requires certain tasks to be run on a scheduled basis. See Nextcloud's [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html documentation] for some details. The easiest (and most reliable) way to set up these background jobs is to use the systemd service and timer units that are already installed by {{Pkg|nextcloud}}. The service unit needs some tweaking so that the job uses the correct PHP ini-file (and not the global {{ic|php.ini}}). Create a [[drop-in file]] (e.g. with [[systemctl edit]]) and add:<br />
<br />
{{hc|/etc/systemd/system/nextcloud-cron.service.d/override.conf|2=<br />
[Service]<br />
ExecStart=<br />
ExecStart=/usr/bin/php -c /etc/webapps/nextcloud/php.ini -f /usr/share/webapps/nextcloud/cron.php<br />
}}<br />
<br />
After that [[enable/start]] {{ic|nextcloud-cron.timer}} (not the service).<br />
<br />
== In-memory caching ==<br />
<br />
Nextcloud's [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/caching_configuration.html documentation] recommends applying some kind of in-memory object cache to significantly improve performance. This section demonstrates the setup of [https://pecl.php.net/package/APCu APCu] - mainly to pinpoint the details that differ from the instructions given in Nextcloud's documentation. The other options ([https://redis.io/ Redis] and [https://www.memcached.org/ memcached]) are also sufficiently covered there.<br />
<br />
Install {{Pkg|php-apcu}} (as a dependency, i.e. with {{ic|--asdeps}}). Enable the extension in the relevant configuration files. These are<br />
<br />
* {{ic|/etc/webapps/nextcloud/php.ini}} used by the {{ic|occ}} command and the background jobs and<br />
* depending on the application server you use either<br />
** {{ic|/etc/uwsgi/nextcloud.ini}} in case you use ''uWSGI'' or<br />
** {{ic|/etc/php/php-fpm.d/nextcloud.conf}} in case you use ''php-fpm''.<br />
<br />
The parameter to do so is already present and only needs to be uncommented. Two other configuration parameters related to ''APCu'' are also already present.<br />
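For reference, after uncommenting, e.g. {{ic|/etc/webapps/nextcloud/php.ini}} should contain lines like the following (the exact set of ''APCu''-related parameters shipped in the file may vary):<br />
<br />
 extension=apcu<br />
 apc.enable_cli=1<br />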
<br />
Restart your application server (not the web server as Nextcloud's documentation claims). Add the following line to your Nextcloud configuration file:<br />
<br />
{{hc|/etc/webapps/nextcloud/config/config.php|2='memcache.local' => '\OC\Memcache\APCu',}}<br />
<br />
That's it. Enjoy your performance boost!<br />
<br />
{{Note|Mind that [https://github.com/nextcloud/notify_push push notify] (the Nextcloud service that replaces client polling with server notifications, thus drastically reducing sync latency) is dependent on Redis.}}<br />
<br />
== Security hardening ==<br />
<br />
See the [https://docs.nextcloud.com/server/latest/admin_manual/installation/harden_server.html Nextcloud documentation] and [[Security]]. Nextcloud additionally provides a [https://scan.nextcloud.com/ Security scanner].<br />
<br />
== Synchronization ==<br />
<br />
{{Tip|For software that automatically authenticates against your Nextcloud server, it is recommended to use so-called app tokens instead of your personal password. This way, in case you suspect that credentials in one of the programs have leaked, you just need to revoke the one affected app token (instead of having to change your personal password and propagate this change to all places where it has been stored). You can generate a new app token in Nextcloud's web GUI in ''Settings'' &rarr; ''Security'' in the ''Devices & Sessions'' section.}}<br />
<br />
=== Desktop ===<br />
<br />
The official client can be installed with the {{Pkg|nextcloud-client}} package. Alternative versions are available in the [[AUR]]: {{AUR|nextcloud-client-git}}. Please keep in mind that using {{Pkg|owncloud-client}} with Nextcloud is not supported.<br />
<br />
The desktop client syncs one or more directories of your desktop computer with corresponding folders in your Nextcloud file service. It integrates nicely with your desktop's file manager (Dolphin in KDE Plasma, Nautilus in GNOME), displaying overlays representing synchronization and share status. The context menu of each file gets an additional entry ''Nextcloud'' to manage sharing of the file and to get its public or internal share link. Nextcloud's documentation has a [https://docs.nextcloud.com/desktop/ manual] exclusively about the desktop client.<br />
<br />
{{Expansion|As of 2022-02-06 (desktop client version 3.4.1) the following additional packages do not seem to be necessary any more. Please elaborate in case they still are.}}<br />
<br />
Additional packages are needed for some features:<br />
<br />
* '''Auto-login:''' All of them use {{Pkg|qtkeychain-qt5}} to store and retrieve account-specific access tokens. To achieve auto-login when the client starts, one of the optional dependencies of ''qtkeychain'' should be installed as well. Moreover, if you choose {{Pkg|libsecret}} as the backend for ''qtkeychain'', a service that provides [https://archlinux.org/packages/?q=org.freedesktop.secrets org.freedesktop.secrets] should be running when the client starts.<br />
* '''File manager integration:''' for {{Pkg|nextcloud-client}}, integration with file managers (e.g. showing Nextcloud folders in GTK+ file dialogs) requires the additional package {{Pkg|nextcloud-client-cloudproviders}}. {{Pkg|nextcloud-client}} already includes cloud providers support by default.<br />
<br />
=== Thunderbird ===<br />
<br />
Since version 91, [[Thunderbird]] fully supports CalDAV and CardDAV - even with auto-detection (i.e. you do not have to provide long URLs to access your calendars and address books). Nextcloud's [https://docs.nextcloud.com/server/latest/user_manual/en/pim/sync_thunderbird.html documentation] is not up to date in this respect.<br />
<br />
==== Calendar ====<br />
<br />
There are a few ways to start the ''new calendar'' wizard. One is via the main menu (&#x2630; at the very right) &rarr; &#x2795; ''New'' &rarr; ''Calendar&hellip;'' Choose ''Network'' and click ''Next''. On the next page enter your username (the one on your Nextcloud server) and the top URL of your Nextcloud server (e.g. {{ic|https://cloud.mysite.org/}}). Click ''Search calendar''. Now provide your password (or, even better, an app token; see above). Finally select the calendar(s) you want to see in Thunderbird and click ''Subscribe''. Be sure to mark read-only calendars (e.g. Nextcloud's birthday calendar) as read-only; otherwise you will repeatedly see reminders that you cannot effectively dismiss.<br />
<br />
==== Contacts ====<br />
<br />
Open Thunderbird's address book - e.g. with ''Shift+Ctrl+B''. Choose ''File'' &rarr; ''New'' &rarr; ''CardDAV address book''. On the next page enter your username (the one on your Nextcloud server) and the top URL of your Nextcloud server (e.g. {{ic|https://cloud.mysite.org/}}). Click ''Next''. Now provide your password (or, even better, an app token; see above). Finally select the address book(s) you want to see in Thunderbird's address book window and click ''Next''.<br />
<br />
=== Mounting files with davfs2 ===<br />
<br />
If you want to mount your Nextcloud using WebDAV, install {{AUR|davfs2}} (as described in [[davfs2]]).<br />
<br />
To mount your Nextcloud, use:<br />
<br />
# mount -t davfs https://''your_domain''/nextcloud/remote.php/dav/files/''username''/ /path/to/mount<br />
<br />
You can also create an entry for this in {{ic|/etc/fstab}}:<br />
<br />
{{hc|/etc/fstab|<br />
https://''your_domain''/nextcloud/remote.php/dav/files/''username''/ /path/to/mount davfs rw,user,noauto 0 0<br />
}}<br />
<br />
{{Tip|In order to allow automount you can also store your username (and password if you like) in a file as described in [[davfs2#Storing credentials]].}}<br />
<br />
{{Note|If creating/copying files is not possible, while the same operations work on directories, see [[davfs2#Creating/copying files not possible and/or freezes]].}}<br />
<br />
=== Mounting files in GNOME Files (Nautilus) ===<br />
<br />
You can access the files directly in Nautilus (''+ Other Locations'') through the WebDAV protocol - use the link as shown in your Nextcloud installation's web GUI (typically <nowiki>https://example.org/remote.php/webdav/</nowiki>) but replace the {{ic|https}} scheme with {{ic|davs}}, e.g. {{ic|davs://example.org/remote.php/webdav/}}. Nautilus will ask for user name and password when trying to connect.<br />
<br />
=== Android ===<br />
<br />
Download the official Nextcloud app from [https://play.google.com/store/apps/details?id=com.nextcloud.client Google Play] or [https://f-droid.org/packages/com.nextcloud.client/ F-Droid].<br />
<br />
To enable contacts and calendar sync (Android 4+):<br />
# download [https://www.davx5.com/ DAVx<sup>5</sup>] ([https://play.google.com/store/apps/details?id=at.bitfire.davdroid Play Store], [https://f-droid.org/app/at.bitfire.davdroid F-Droid])<br />
# enable {{ic|mod_rewrite}} in {{ic|httpd.conf}} (when using Apache)<br />
# create a new account in the ''Account'' settings of ''DAVx<sup>5</sup>'', and specify your "short" server address and your login/password pair, e.g. {{ic|<nowiki>https://cloud.example.com</nowiki>}} (there is no need for the {{ic|<nowiki>/remote.php/{carddav,webdav}</nowiki>}} part if you configured your web server with the proper redirections, as illustrated previously in the article; ''DAVx<sup>5</sup>'' will find the right URLs itself)<br />
<br />
=== iOS ===<br />
<br />
Download the official Nextcloud app from the [https://itunes.apple.com/us/app/nextcloud/id1125420102 App Store].<br />
<br />
== Tips and tricks ==<br />
<br />
=== Using the {{ic|occ}} command line tool ===<br />
<br />
A useful tool for server administration is {{ic|occ}}. Refer to Nextcloud's documentation for [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html details]. You can perform many common server operations with {{ic|occ}}, such as managing users and configuring apps.<br />
<br />
A convenience wrapper around {{ic|/usr/share/webapps/nextcloud/occ}} is provided as {{ic|/usr/bin/occ}}, which automatically runs as the default user (''nextcloud''), using the default [[PHP]] executable and PHP configuration file. The environment variables {{ic|NEXTCLOUD_USER}}, {{ic|NEXTCLOUD_PHP}} and {{ic|NEXTCLOUD_PHP_CONFIG}} can be used to specify a non-default user, PHP executable and PHP configuration file (respectively). Especially the latter ({{ic|NEXTCLOUD_PHP_CONFIG}}) is necessary when Nextcloud was set up as described in the sections [[#Configuration]] and [[#Application servers]], i.e. using PHP configurations specific to Nextcloud. In this case put {{ic|1=export NEXTCLOUD_PHP_CONFIG=/etc/webapps/nextcloud/php.ini}} in your {{ic|.bashrc}}.<br />
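With the wrapper in place, typical administrative calls look like this (all of these are standard {{ic|occ}} subcommands; the wrapper switches to the ''nextcloud'' user itself):<br />
<br />
 # occ status<br />
 # occ app:list<br />
 # occ maintenance:mode --on<br />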
<br />
{{Warning| When using {{pkg|php-apcu}} for caching, make sure to set {{ic|1= apc.enable_cli=1}} in {{ic|/etc/webapps/nextcloud/php.ini}}, as the {{ic|occ}} command will otherwise run out of memory ({{Bug|69726}}).}}<br />
<br />
=== Pacman hook ===<br />
<br />
To automatically upgrade the Nextcloud database on package update, you can make use of the included [[pacman hook]]:<br />
<br />
# mkdir -vp /etc/pacman.d/hooks<br />
# ln -sv /usr/share/doc/nextcloud/nextcloud.hook /etc/pacman.d/hooks/<br />
<br />
{{Note| The packaged pacman hook assumes that the global {{ic|php.ini}} is used for the application.}}<br />
<br />
=== Running Nextcloud in a subdirectory ===<br />
<br />
The instructions in section [[#Web server|Web server]] will result in a setup where your Nextcloud installation is reachable via a dedicated server name, e.g. {{ic|cloud.mysite.com}}. If you would like to have Nextcloud located in a subdirectory, e.g. {{ic|www.mysite.com/nextcloud}}, then:<br />
<br />
* For nginx refer to the section in Nextcloud's documentation that explicitly covers this [https://docs.nextcloud.com/server/latest/admin_manual/installation/nginx.html#nextcloud-in-a-subdir-of-the-nginx-webroot topic].<br />
<br />
* For Apache edit the {{ic|/etc/httpd/conf/extra/nextcloud.conf}} you included and comment out the {{ic|<nowiki><VirtualHost *:80> ... </VirtualHost></nowiki>}} part of the include file.<br />
<br />
{{Note| Do not forget to configure the {{ic |.well-known}} URLs for service discovery. For more information please see [https://docs.nextcloud.com/server/latest/admin_manual/issues/general_troubleshooting.html#service-discovery Service discovery] in Nextcloud's documentation.}}<br />
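For nginx, the service discovery redirects look roughly like this (assuming Nextcloud under {{ic|/nextcloud}}; check the linked documentation for the authoritative list of {{ic|.well-known}} entries):<br />
<br />
 location = /.well-known/carddav {<br />
     return 301 $scheme://$host/nextcloud/remote.php/dav/;<br />
 }<br />
 location = /.well-known/caldav {<br />
     return 301 $scheme://$host/nextcloud/remote.php/dav/;<br />
 }<br />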
<br />
=== Docker ===<br />
<br />
See the [https://hub.docker.com/_/owncloud/ ownCloud] or [https://github.com/nextcloud/docker Nextcloud] repository for [[Docker]].<br />
<br />
=== Office integration ===<br />
<br />
There are currently three different solutions for office integration:<br />
<br />
* [https://www.collaboraoffice.com/collabora-online/ Collabora Online]<br />
* [https://www.onlyoffice.com/ ONLYOFFICE]<br />
* [https://docs.microsoft.com/en-us/officeonlineserver/office-online-server-overview MS Office Online Server]<br />
<br />
All three have in common that a dedicated server is required and your web server needs to be adapted to forward certain requests to the office service. The actual integration with Nextcloud is then accomplished by means of a Nextcloud app specific to the respective product.<br />
<br />
Mind that all three products are aimed at businesses, i.e. you will have to pay for the office service. Only Collabora offers a free developer edition ([https://www.collaboraoffice.com/code/ CODE]). ONLYOFFICE offers a [https://www.onlyoffice.com/en/docs-enterprise-prices.aspx Home Server] plan for a reasonable price.<br />
<br />
For installation, setup instructions and integration with Nextcloud consult:<br />
<br />
* [https://nextcloud.com/collaboraonline/ Collabora online], [https://apps.nextcloud.com/apps/richdocuments app]<br />
* [https://api.onlyoffice.com/editors/nextcloud ONLYOFFICE], [https://apps.nextcloud.com/apps/onlyoffice app]<br />
* [https://github.com/nextcloud/officeonline MS Office Online Server], [https://apps.nextcloud.com/apps/officeonline app]<br />
<br />
=== Disabling app recommendations ===<br />
<br />
By default, Nextcloud recommends apps to new clients, which can result in a lot of notifications. To prevent this, disable the recommendation app using {{ic|occ}}.<br />
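Assuming the app identifier is {{ic|recommendations}} (verify the exact identifier with {{ic|occ app:list}} first), it can be disabled like this:<br />
<br />
 # occ app:disable recommendations<br />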
<br />
=== Backup calendars and address books with calcardbackup ===<br />
<br />
The {{AUR|calcardbackup}} package can be installed and configured to provide regular backups of the calendar and/or address book databases. Edit {{ic|/etc/calcardbackup/calcardbackup.conf}} to your liking and then [[start]] and [[enable]] {{ic|calcardbackup.timer}}.<br />
<br />
== Troubleshooting ==<br />
<br />
By default, the logs of the web application are available in {{ic|/var/log/nextcloud/nextcloud.log}}.<br />
<br />
=== Issues with permissions and setup after upgrade to >= 21.0.0 ===<br />
<br />
{{Note| Before Nextcloud 21.0.0, the web application was run as the {{ic|http}} user. This is a security concern with regard to cross-application access by this user (it has access to all data of all web applications).}}<br />
<br />
Since version 21.0.0, Nextcloud more closely follows the [[web application package guidelines]]. This introduces the separate user {{ic|nextcloud}}, as which the web application is run.<br />
<br />
After an upgrade from nextcloud < 21.0.0 make sure that<br />
<br />
* the data directory is located at {{ic|/var/lib/nextcloud/data}}<br />
* the writable apps directory is located at {{ic|/var/lib/nextcloud/apps}}<br />
* both the data directory and the writable apps directory, along with all files beneath them, are writable and owned by the {{ic|nextcloud}} user<br />
* the web application configuration file resides in {{ic|/etc/webapps/nextcloud/config/}} and that this directory and its contents are writable and owned by the {{ic|nextcloud}} user<br />
* an application server, such as {{pkg|php-fpm}} or [[UWSGI]], is configured to run the web application as the {{ic|nextcloud}} user and not the {{ic|http}} user<br />
* the cron job/systemd timer is updated to run as the new user<br />
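For example, ownership of the directories mentioned above can be corrected like this (adapt the paths in case your data directory lives elsewhere):<br />
<br />
 # chown -R nextcloud:nextcloud /var/lib/nextcloud /etc/webapps/nextcloud/config<br />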
<br />
=== Login loop without any clue in access.log, error.log, nor nextcloud.log ===<br />
<br />
As mentioned in a [https://bbs.archlinux.org/viewtopic.php?pid=1967719#p1967719 post in the forum], this issue can be fixed by setting correct permissions on the ''sessions'' directory. (See [https://docs.nextcloud.com/server/stable/admin_manual/installation/nginx.html#login-loop-without-any-clue-in-access-log-error-log-nor-nextcloud-log Nextcloud's documentation] for details.) It is also possible that the ''sessions'' directory is missing altogether. The creation of this directory is documented in [[#System_and_environment|System and environment]].<br />
<br />
{{ic|/var/lib/nextcloud}} should look like this:<br />
<br />
drwxr-xr-x 6 nextcloud nextcloud 4096 17. Apr 00:56 ./<br />
drwxr-xr-x 21 root root 4096 17. Apr 00:53 ../<br />
drwxr-xr-x 2 nextcloud nextcloud 4096 16. Feb 00:21 apps/<br />
drwxrwx--- 10 nextcloud nextcloud 4096 16. Apr 13:46 data/<br />
drwx------ 2 nextcloud nextcloud 4096 17. Apr 01:04 sessions/<br />
<br />
=== Environment variables not available ===<br />
<br />
Depending on the application server you use, custom environment variables can be provided to Nextcloud's PHP code.<br />
<br />
'''php-fpm'''<br />
<br />
Add one or more lines in {{ic|/etc/php/php-fpm.d/nextcloud.conf}} as per [https://docs.nextcloud.com/server/latest/admin_manual/installation/source_installation.html#php-fpm-tips-label Nextcloud's documentation], e.g.:<br />
<br />
env[PATH] = /usr/local/bin:/usr/bin:/bin<br />
<br />
'''uwsgi'''<br />
<br />
Add one or more lines in {{ic|/etc/uwsgi/nextcloud.ini}}, e.g.:<br />
<br />
env = PATH=/usr/local/bin:/usr/bin:/bin<br />
<br />
Mind there must not be any blanks around the second {{ic|1==}}.<br />
<br />
=== Self-signed certificate not accepted ===<br />
<br />
{{Remove|Subsections below and including this one are considered outdated (e.g. mentioning of owncloud). In case you know a certain subsection still applies, please update it and move it above this marker. Any subsection not updated and moved above this marker by 2022-09-30 will be deleted.}}<br />
<br />
ownCloud uses [[Wikipedia:cURL]] and [[Wikipedia:SabreDAV]] to check if WebDAV is enabled.<br />
If you use SSL/TLS with a self-signed certificate, e.g. as shown in [[LAMP]], and access ownCloud's admin panel, you will see the following error message:<br />
<br />
Your web server is not yet properly setup to allow files synchronization because the WebDAV interface seems to be broken.<br />
<br />
Assuming that you followed the [[LAMP]] tutorial, execute the following steps:<br />
<br />
Create a local directory for non-distribution certificates and copy [[LAMP]]'s certificate there. This will prevent {{ic|ca-certificates}} updates from overwriting it.<br />
<br />
# cp /etc/httpd/conf/server.crt /usr/share/ca-certificates/''WWW.EXAMPLE.COM.crt''<br />
<br />
Add ''WWW.EXAMPLE.COM.crt'' to {{ic|/etc/ca-certificates.conf}}:<br />
<br />
''WWW.EXAMPLE.COM.crt''<br />
<br />
Now, regenerate your certificate store:<br />
<br />
# update-ca-certificates<br />
<br />
Restart the httpd service to activate your certificate.<br />
<br />
=== Self-signed certificate for Android devices ===<br />
<br />
Once you have followed the setup for SSL, for example as in [[Apache HTTP Server#TLS]], early versions of DAVdroid will reject the connection because the certificate is not trusted. A certificate can be made as follows on your server:<br />
<br />
# openssl x509 -req -days 365 -in /etc/httpd/conf/server.csr -signkey /etc/httpd/conf/server.key -extfile android.txt -out CA.crt<br />
# openssl x509 -inform PEM -outform DER -in CA.crt -out CA.der.crt <br />
<br />
The file {{ic|android.txt}} should contain the following:<br />
<br />
basicConstraints=CA:true<br />
<br />
Then import {{ic|CA.der.crt}} to your Android device:<br />
<br />
Put the {{ic|CA.der.crt}} file onto the sdcard of your Android device (usually to the internal one, e.g. save from a mail attachment).<br />
It should be in the root directory. Go to ''Settings > Security > Credential storage'' and select ''Install from device storage''.<br />
The {{ic|.crt}} file will be detected and you will be prompted to enter a certificate name. After importing the certificate,<br />
you will find it in ''Settings > Security > Credential storage > Trusted credentials > User''.<br />
<br />
Thanks to: [https://web.archive.org/web/20150323082541/http://www.leftbrainthings.com/2013/10/13/creating-and-importing-self-signed-certificate-to-android-device/]<br />
<br />
Another way is to import the certificate directly from your server via [https://f-droid.org/en/packages/at.bitfire.cadroid/ CAdroid] and follow the instructions there.<br />
<br />
=== CSync failed to find a specific file. ===<br />
<br />
This is most likely a certificate issue. Recreate it, and do not leave the common name empty or you will see the error again.<br />
<br />
# openssl req -new -x509 -nodes -newkey rsa:4096 -keyout server.key -out server.crt<br />
<br />
=== Seeing white page after login ===<br />
<br />
The cause is probably a new app that you installed. To fix that, you can use the occ command as described<br />
[https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html here]. So with<br />
<br />
# sudo -u http php /usr/share/webapps/nextcloud/occ app:list<br />
<br />
you can list all apps (if you installed nextcloud in the standard directory), and with <br />
<br />
# sudo -u http php /usr/share/webapps/nextcloud/occ app:disable <nameOfExtension><br />
<br />
you can disable the troubling app.<br />
<br />
Alternatively, you can either use [[phpMyAdmin]] to edit the {{ic|oc_appconfig}} table (if you got lucky and the table has an edit option), or do it by hand with mysql:<br />
<br />
# mysql -u root -p owncloud<br />
MariaDB [owncloud]> '''delete from''' oc_appconfig '''where''' appid='<nameOfExtension>' '''and''' configkey='enabled' '''and''' configvalue='yes';<br />
MariaDB [owncloud]> '''insert into''' oc_appconfig (appid,configkey,configvalue) '''values''' ('<nameOfExtension>','enabled','no');<br />
<br />
This should delete the relevant configuration from the table and add it again.<br />
<br />
=== GUI sync client fails to connect ===<br />
<br />
If using HTTP basic authentication, make sure to exclude "status.php", which must be publicly accessible. [https://github.com/owncloud/mirall/issues/734]<br />
<br />
=== GUI tray icon disappears, but client still running in the background ===<br />
<br />
After waking up from a suspended state, the Nextcloud client tray icon may disappear from the system tray. A workaround is to delay the startup of the client, as noted [https://github.com/nextcloud/desktop/issues/203#issuecomment-463957811 here]. This can be done with the .desktop file, for example:<br />
<br />
{{hc|.local/share/applications/nextcloud.desktop|<nowiki><br />
...<br />
Exec=bash -c 'sleep 5 && nextcloud'<br />
...<br />
</nowiki>}}<br />
<br />
=== Some files upload, but give an error 'Integrity constraint violation...' ===<br />
<br />
You may see the following error in the ownCloud sync client:<br />
<br />
SQLSTATE[23000]: Integrity constraint violation: ... Duplicate entry '...' for key 'fs_storage_path_hash')...<br />
<br />
This is caused by an issue with the File Locking app, which is often not sufficient to keep conflicts from occurring on some web server configurations. A more complete [https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/files_locking_transactional.html Transactional File Locking] is available that eliminates these errors, but you must be using the Redis PHP caching method. Install {{Pkg|redis}} and {{Pkg|php-redis}}, comment out your current PHP cache mechanism, and then in {{ic|/etc/php/conf.d/redis.ini}} uncomment {{ic|1=extension=redis}}.<br />
Then in {{ic|config.php}} make the following changes:<br />
<br />
'memcache.local' => '\OC\Memcache\Redis',<br />
'filelocking.enabled' => 'true',<br />
'memcache.locking' => '\OC\Memcache\Redis',<br />
'redis' => array(<br />
'host' => 'localhost',<br />
'port' => 6379,<br />
'timeout' => 0.0,<br />
),<br />
<br />
and [[start/enable]] {{ic|redis.service}}.<br />
<br />
Finally, disable the File Locking app, as the Transactional File Locking will take care of it (and would conflict).<br />
<br />
If everything is working, you should see 'Transactional File Locking Enabled' under Server Status on the Admin page, and syncs should no longer cause issues.<br />
<br />
=== "Cannot write into apps directory" ===<br />
<br />
As mentioned in the [https://docs.nextcloud.com/server/latest/admin_manual/apps_management.html official admin manual],<br />
either you need an apps directory that is writable by the http user, or you need to set {{ic|appstoreenabled}} to {{ic|false}}. <br />
<br />
If you have set {{ic|open_basedir}} in your PHP/web server configuration file (e.g. {{ic|/etc/httpd/conf/extra/nextcloud.conf}}), it may be necessary to add your ''/path/to/data'' directory to the string on the line starting with {{ic|php_admin_value open_basedir }}:<br />
<br />
{{hc|/etc/httpd/conf/extra/nextcloud.conf|2=<br />
<br />
php_admin_value open_basedir "''/path/to/data/'':/srv/http/:/dev/urandom:/tmp/:/usr/share/pear/:/usr/share/webapps/nextcloud/:/etc/webapps/nextcloud"<br />
}}<br />
<br />
=== Installed apps get blocked because of MIME type error ===<br />
<br />
If you are putting your apps folder outside of the Nextcloud installation directory, make sure your web server serves it properly.<br />
<br />
In nginx this is accomplished by adding a location block to the configuration, as the folder will not be included in it by default.<br />
<br />
location ~ /apps2/(.*)$ {<br />
alias /var/www/nextcloud/apps/$1;<br />
}<br />
<br />
=== CSS and JS resources blocked due to MIME type error ===<br />
<br />
If you load your Nextcloud web gui and it is missing styles etc. check the browser's console logs for lines like:<br />
<br />
<nowiki>The resource from “https://example.com/core/css/guest.css?v=72c34c37-0” was blocked due to MIME type (“text/plain”) mismatch (X-Content-Type-Options: nosniff).</nowiki><br />
<br />
There are a few possible reasons. Possibly you have [https://docs.nextcloud.com/server/latest/admin_manual/installation/nginx.html#javascript-js-or-css-css-files-not-served-properly not included any MIME types] in your {{ic|nginx.conf}}; in that case, add the following to {{ic|nginx.conf}}:<br />
<br />
types_hash_max_size 2048;<br />
types_hash_bucket_size 128;<br />
include mime.types;<br />
<br />
Here we use the {{ic|mime.types}} provided by {{Pkg|mailcap}}; due to the large number of types included, we increase the allowed size of the types hash.<br />
<br />
Other possible reasons for these errors are missing permissions on the files. Make sure the files are owned by {{ic|http:http}} and can be read and written to by this user.<br />
<br />
=== Security warnings even though the recommended settings have been included in nginx.conf ===<br />
<br />
At the top of the admin page there might be a warning to set the {{ic|Strict-Transport-Security}}, {{ic|X-Content-Type-Options}},<br />
{{ic|X-Frame-Options}}, {{ic|X-XSS-Protection}} and {{ic|X-Robots-Tag}} according to https://docs.nextcloud.com/server/latest/admin_manual/installation/harden_server.html even though they are already set like that.<br />
<br />
A possible cause could be that ownCloud sets those headers itself, uWSGI passes them along, and nginx adds them again:<br />
<br />
{{hc|$ curl -I https://domain.tld|<nowiki><br />
...<br />
X-XSS-Protection: 1; mode=block<br />
X-Content-Type-Options: nosniff<br />
X-Frame-Options: Sameorigin<br />
X-Robots-Tag: none<br />
Strict-Transport-Security: max-age=15768000; includeSubDomains; preload;<br />
X-Content-Type-Options: nosniff<br />
X-Frame-Options: SAMEORIGIN<br />
X-XSS-Protection: 1; mode=block<br />
X-Robots-Tag: none<br />
</nowiki>}}<br />
<br />
While the FastCGI sample configuration has a parameter to avoid this ({{ic|fastcgi_param modHeadersAvailable true; #Avoid sending the security headers twice}}), when using uWSGI and nginx the following modification of the uWSGI part of {{ic|nginx.conf}} can help:<br />
<br />
{{hc| /etc/nginx/nginx.conf|<nowiki><br />
...<br />
# pass all .php or .php/path urls to uWSGI<br />
location ~ ^(.+\.php)(.*)$ {<br />
include uwsgi_params;<br />
uwsgi_modifier1 14;<br />
# hide the following headers received from uwsgi, because otherwise we would send them twice since we already add them in nginx itself<br />
uwsgi_hide_header X-Frame-Options;<br />
uwsgi_hide_header X-XSS-Protection;<br />
uwsgi_hide_header X-Content-Type-Options;<br />
uwsgi_hide_header X-Robots-Tag;<br />
#Uncomment the line below if you get a connection refused error. Remember to comment out the line with "uwsgi_pass 127.0.0.1:3001;" below<br />
uwsgi_pass unix:/run/uwsgi/owncloud.sock;<br />
#uwsgi_pass 127.0.0.1:3001;<br />
}<br />
...<br />
</nowiki>}}<br />
<br />
=== "Reading from keychain failed with error: 'No keychain service available'" ===<br />
<br />
On GNOME, this can be fixed by installing the two packages {{Pkg|libgnome-keyring}} and {{Pkg|gnome-keyring}}.<br />
On KDE, install {{Pkg|libgnome-keyring}} and {{Pkg|qtkeychain-qt5}} instead.<br />
<br />
=== FolderSync: "Method Not Allowed" ===<br />
<br />
FolderSync needs access to {{ic|/owncloud/remote.php/webdav}}, so you can create another alias for owncloud in your {{ic|/etc/httpd/conf/extra/nextcloud.conf}}:<br />
<br />
<IfModule mod_alias.c><br />
Alias /nextcloud /usr/share/webapps/nextcloud/<br />
Alias /owncloud /usr/share/webapps/nextcloud/<br />
</IfModule><br />
<br />
=== Log file spam ===<br />
<br />
{{Accuracy|This section was added 2022-03-08 without referencing an upstream bug. While it might still be relevant, we have no way to know whether the issue has been fixed until a link to the bug report is provided.}}<br />
<br />
The cause could be a PHP version that is too new. Until this is fixed, the log level in Nextcloud's {{ic|config.php}} can be adjusted.<br />
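<br />
For example, to log only errors and above until the underlying issue is resolved (loglevel {{ic|2}}, warnings, is Nextcloud's default; higher values log less):<br />
<br />
{{hc|/etc/webapps/nextcloud/config/config.php|2=<nowiki>'loglevel' => 3,</nowiki>}}<br />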
<br />
== See also ==<br />
<br />
* [https://docs.nextcloud.com/ Nextcloud Documentation Overview]<br />
* [https://docs.nextcloud.com/server/latest/admin_manual/ Nextcloud Admin Manual]</div>
<hr />
<div>[[Category:File sharing]]<br />
[[Category:Web applications]]<br />
[[ja:Nextcloud]]<br />
{{Related articles start}}<br />
{{Related|Apache HTTP Server}}<br />
{{Related|Nginx}}<br />
{{Related|UWSGI}}<br />
{{Related|OpenSSL}}<br />
{{Related|WebDAV}}<br />
{{Related articles end}}<br />
<br />
From [[Wikipedia:Nextcloud]]:<br />
<br />
:Nextcloud is a suite of client-server software for creating and using file hosting services. It is functionally similar to Dropbox, although Nextcloud is free and open-source, allowing anyone to install and operate it on a private server. In contrast to proprietary services like Dropbox, the open architecture allows adding additional functionality to the server in form of applications.<br />
<br />
Nextcloud is a fork of ownCloud. For differences between the two, see [[Wikipedia:Nextcloud#Differences from ownCloud]].<br />
<br />
{{Note|For version 23 Nextcloud's PHP code has been extensively patched for the Arch Linux package to achieve PHP 8.1 compatibility. The downside of this is that Nextcloud's built-in code integrity check fails for all patched files. The corresponding warnings in the admin's view can be ignored. Version 24 is [https://github.com/nextcloud/server/issues/29287#issuecomment-1019277996 expected to be compatible] with PHP 8.1 and is scheduled for [https://github.com/nextcloud/server/wiki/Maintenance-and-Release-Schedule#nextcloud-24 early May].}}<br />
<br />
== Setup overview ==<br />
<br />
A complete installation of Nextcloud comprises (at least) the following components:<br />
<br />
'''A web server''' paired with '''an application server''' on which runs '''Nextcloud''' (i.e. the PHP code) using '''a database'''. <br />
<br />
This article will cover [[MariaDB]]/MySQL and [[PostgreSQL]] as databases and the following combinations of web server and application server:<br />
<br />
* nginx &rarr; uWSGI (plus uwsgi-plugin-php)<br />
* nginx &rarr; php-fpm<br />
* Apache (using mod_proxy_uwsgi) &rarr; uWSGI (plus uwsgi-plugin-php)<br />
* Apache (using mod_proxy_fcgi) &rarr; php-fpm<br />
<br />
The Nextcloud package complies with the [[web application package guidelines]]. Among other things this mandates that the web application be run with a dedicated user - in this case {{ic|nextcloud}}. This is one of the reasons why the application server comes into play here. For the very same reason it is no longer possible to execute Nextcloud's PHP code directly in the Apache process by means of {{pkg|php-apache}}.<br />
<br />
== Installation ==<br />
<br />
Install the {{Pkg|nextcloud}} package. This will pull in quite a few dependent packages. All [https://docs.nextcloud.com/server/stable/admin_manual/installation/source_installation.html#prerequisites-for-manual-installation required PHP extensions] will be taken care of this way. Additionally install the recommended packages {{Pkg|php-imagick}} for preview generation and {{Pkg|php-intl}} for increased translation performance and fixed sorting (preferably as dependent packages with the pacman option {{ic|--asdeps}}). Other optional dependencies will be covered later depending on your concrete setup (e.g. which database you choose).<br />
<br />
== Configuration ==<br />
<br />
=== PHP ===<br />
<br />
This guide does not tamper with PHP's central configuration file {{ic|/etc/php/php.ini}} but instead puts Nextcloud specific PHP configuration in places where it does not potentially interfere with settings for other PHP based applications. These places are:<br />
<br />
* A dedicated copy of {{ic|php.ini}} in {{ic|/etc/webapps/nextcloud/php.ini}} (for the {{ic|occ}} command line tool and the background job).<br />
* Corresponding settings in the configuration of the application server. These will be covered in the section about application servers.<br />
<br />
Make a copy of {{ic|/etc/php/php.ini}} in {{ic|/etc/webapps/nextcloud}}. Although not strictly necessary, change the ownership of the copy:<br />
<br />
{{bc|chown nextcloud:nextcloud /etc/webapps/nextcloud/php.ini}}<br />
<br />
Most of the prerequisites listed in Nextcloud's [https://docs.nextcloud.com/server/stable/admin_manual/installation/source_installation.html#prerequisites-for-manual-installation installation instructions] are already enabled in a bare PHP installation. Additionally enable the following extensions:<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
extension=bcmath<br />
extension=bz2<br />
extension=exif<br />
extension=gd<br />
extension=iconv<br />
; in case you installed php-imagick (as recommended)<br />
extension=imagick<br />
; in case you also installed php-intl (as recommended)<br />
extension=intl<br />
}}<br />
<br />
Set {{ic|date.timezone}} to your preferred timezone, e.g.:<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
date.timezone = Europe/Berlin<br />
}}<br />
<br />
Raise PHP's memory limit to at least 512 MiB:<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
memory_limit = 512M<br />
}}<br />
<br />
Optional: For additional security configure {{ic|open_basedir}}. This limits the locations where Nextcloud's PHP code can read and write files. Proven settings are<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
open_basedir=/var/lib/nextcloud/data:/var/lib/nextcloud/apps:/tmp:/usr/share/webapps/nextcloud:/etc/webapps/nextcloud:/dev/urandom:/usr/lib/php/modules:/var/log/nextcloud:/proc/meminfo<br />
}}<br />
<br />
Depending on which additional extensions you configure you may need to extend this list, e.g. {{ic|/run/redis}} in case you opt for [[Redis]].<br />
<br />
It is not necessary to configure opcache here as this {{ic|php.ini}} is only used by the {{ic|occ}} command line tool and the background job, i.e. by short running PHP processes.<br />
<br />
=== Nextcloud ===<br />
<br />
Add the following entries to Nextcloud's configuration file:<br />
<br />
{{hc|/etc/webapps/nextcloud/config/config.php|2=<nowiki><br />
'trusted_domains' =><br />
array (<br />
0 => 'localhost',<br />
1 => 'cloud.example.org',<br />
), <br />
'overwrite.cli.url' => 'https://cloud.example.org/',<br />
'htaccess.RewriteBase' => '/',<br />
</nowiki>}}<br />
<br />
Adapt the given example hostname {{ic|cloud.example.org}}. In case your Nextcloud installation will be reachable via a subfolder (e.g. {{ic|<nowiki>https://www.example.com/nextcloud</nowiki>}}), {{ic|overwrite.cli.url}} and {{ic|htaccess.RewriteBase}} have to be modified accordingly.<br />
<br />
=== System and environment ===<br />
<br />
To make sure the Nextcloud specific {{ic|php.ini}} is used by the {{ic|occ}} tool set the environment variable {{ic|NEXTCLOUD_PHP_CONFIG}}:<br />
<br />
{{bc|1=<br />
export NEXTCLOUD_PHP_CONFIG=/etc/webapps/nextcloud/php.ini<br />
}}<br />
<br />
Also add this line to your {{ic|.bashrc}} to make this setting permanent.<br />
<br />
As a privacy and security precaution create the dedicated directory for session data:<br />
<br />
{{bc|1=<br />
install --owner=nextcloud --group=nextcloud --mode=700 -d /var/lib/nextcloud/sessions<br />
}}<br />
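<br />
For PHP to actually use this directory, {{ic|session.save_path}} must point at it, e.g. in the Nextcloud specific {{ic|php.ini}} (and correspondingly in your application server configuration):<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
session.save_path = /var/lib/nextcloud/sessions<br />
}}<br />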
<br />
== Database ==<br />
<br />
[[MariaDB]]/MySQL is the canonical choice for Nextcloud.<br />
<br />
:The MySQL or MariaDB databases are the recommended database engines.[https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/linux_database_configuration.html]<br />
<br />
Most information concerning databases with Nextcloud deals with MariaDB/MySQL. The Nextcloud developers admit to having [https://github.com/nextcloud/server/issues/5912#issuecomment-318568370 less detailed expertise] with other databases. <br />
<br />
[[PostgreSQL]] is said to deliver better performance and overall has fewer quirks compared to MariaDB/MySQL. [[SQLite]] is mainly supported for test / development installations and not recommended for production. The [https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/linux_database_configuration.html list of supported databases] also contains [[Oracle Database]]. This product will not be covered here.<br />
<br />
=== MariaDB / MySQL ===<br />
<br />
[[MariaDB]] has been the default MySQL implementation in Arch Linux since 2013[https://archlinux.org/news/mariadb-replaces-mysql-in-repositories/], so this text only mentions MariaDB.<br />
<br />
In case you want to run your database on the same host as Nextcloud install {{Pkg|mariadb}} (if you have not done so already). See the corresponding [[MariaDB|article]] for details. Do not forget to initialize MariaDB with {{ic|mariadb-install-db}}. It is recommended for additional security to configure MariaDB to [[MariaDB#Enable_access_locally_only_via_Unix_sockets|only listen on a local Unix socket]]:<br />
<br />
{{hc|/etc/my.cnf.d/server.cnf|2=<br />
[mysqld]<br />
skip_networking<br />
}}<br />
<br />
{{Note|Surprisingly, Nextcloud is not compatible with MariaDB version 10.6 or higher (see {{Bug|71549}}). This is due to MariaDB forcing read-only for compressed InnoDB tables[https://mariadb.com/kb/en/innodb-compressed-row-format/#read-only] and Nextcloud using this kind of tables:<br />
<br />
:From MariaDB 10.6.0, tables that are of the {{ic|COMPRESSED}} row format are read-only by default. This is the first step towards removing write support and deprecating the feature.<br />
<br />
Upstream is aware of this [https://github.com/nextcloud/server/issues/25436 problem] but a [https://github.com/nextcloud/server/issues/25436#issuecomment-883213001 quick fix seems unlikely].<br />
<br />
One easy remedy for this issue is to allow write access to compressed InnoDB tables again by means of MariaDB's system variable [https://mariadb.com/docs/reference/mdb/system-variables/innodb_read_only_compressed/ innodb_read_only_compressed]. Just add the following section to your configuration of MariaDB:<br />
<br />
{{hc|/etc/my.cnf.d/server.cnf|2=<br />
[mariadb-10.6]<br />
innodb_read_only_compressed=OFF<br />
}}<br />
}}<br />
<br />
Nextcloud's own documentation [https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/linux_database_configuration.html#database-read-committed-transaction-isolation-level recommends] to set the transaction isolation level to READ-COMMITTED. This is especially important when you expect high load with many concurrent transactions.<br />
<br />
{{hc|/etc/my.cnf.d/server.cnf|2=<br />
[mysqld]<br />
transaction_isolation=READ-COMMITTED}}<br />
<br />
The other recommendation to set {{ic|1=binlog_format=ROW}} is obsolete. The default {{ic|MIXED}} in recent MariaDB versions is at least as good as the recommended {{ic|ROW}}. In any case the setting is only relevant when replication is applied.<br />
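<br />
To verify the isolation level actually in effect, you can query the server (the variable name differs between MariaDB/MySQL versions, hence the pattern):<br />
<br />
{{bc|SHOW VARIABLES LIKE '%isolation%';}}<br />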
<br />
Start the CLI tool {{ic|mysql}} as database user root. (The default password is empty; change it as soon as possible.)<br />
<br />
{{bc|mysql -u root -p}}<br />
<br />
Create the user and database for Nextcloud with <br />
<br />
{{bc|<br />
CREATE USER 'nextcloud'@'localhost' IDENTIFIED BY 'xxxxxxxx';<br />
CREATE DATABASE IF NOT EXISTS nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;<br />
GRANT ALL PRIVILEGES on nextcloud.* to 'nextcloud'@'localhost';<br />
FLUSH privileges;}}<br />
<br />
({{ic|xxxxxxxx}} is a placeholder for the actual password of DB user ''nextcloud'' that you must choose.) Quit the tool with {{ic|\q}}.<br />
<br />
{{Note|MariaDB has a flawed understanding of what UTF8 means, resulting in the inability to store any characters with codepoints 0x10000 and above (e.g. emojis). This was 'fixed' in version 5.5 by introducing a new encoding called ''utf8mb4''. Bottom line: never use MariaDB's ''utf8'', always use ''utf8mb4''. In case you need to migrate, see [https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/mysql_4byte_support.html].}}<br />
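<br />
As a sketch, the migration linked in the note above essentially amounts to enabling 4-byte support and repairing existing tables (see the linked documentation for the exact, version specific steps):<br />
<br />
{{bc|1=<br />
occ config:system:set mysql.utf8mb4 --type boolean --value="true"<br />
occ maintenance:repair<br />
}}<br />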
<br />
Now that you have decided to use MariaDB as the database of your Nextcloud installation, enable the corresponding PHP extension:<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
extension=pdo_mysql<br />
}}<br />
<br />
Further configuration (related to MariaDB) is not required (contrary to the information given in Nextcloud's [https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/linux_database_configuration.html#configuring-a-mysql-or-mariadb-database admin manual]).<br />
<br />
Now setup Nextcloud's database schema with:<br />
<br />
{{bc|1=<br />
occ maintenance:install \<br />
--database=mysql \<br />
--database-name=nextcloud \<br />
--database-host=localhost:/run/mysqld/mysqld.sock \<br />
--database-user=nextcloud \<br />
--database-pass=xxxxxxxx \<br />
--admin-pass=zzzzzzzz \<br />
--admin-email=aaaa@bbbbb \<br />
--data-dir=/var/lib/nextcloud/data<br />
}}<br />
<br />
Mind the placeholders (e.g. {{ic|xxxxxxxx}}) and replace them with appropriate values. This command assumes that you run your database on the same host as Nextcloud. Enter {{ic|occ help maintenance:install}} and see Nextcloud's [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html#command-line-installation documentation] for other options.<br />
<br />
=== PostgreSQL ===<br />
<br />
Consult the corresponding [[PostgreSQL|article]] for detailed information about PostgreSQL. In case you want to run your database on the same host as Nextcloud install {{Pkg|postgresql}} (if you have not done so already). For additional security in this scenario it is recommended to configure PostgreSQL to [[PostgreSQL#Configure_PostgreSQL_to_be_accessible_exclusively_through_UNIX_Sockets|only listen on a local UNIX socket]]:<br />
<br />
{{hc|/var/lib/postgres/data/postgresql.conf|2=<br />
listen_addresses = <nowiki>''</nowiki><br />
}}<br />
<br />
In particular, do not forget to initialize your database with {{ic|initdb}}. After having done so, start PostgreSQL's CLI tool {{ic|psql}}<br />
<br />
{{bc|<br />
runuser -u postgres -- psql<br />
}}<br />
<br />
and create the database user {{ic|nextcloud}} and the database of the same name<br />
<br />
{{bc|1=<br />
CREATE USER nextcloud WITH PASSWORD 'xxxxxxxx';<br />
CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';<br />
ALTER DATABASE nextcloud OWNER TO nextcloud;<br />
GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;<br />
\q<br />
}}<br />
<br />
({{ic|xxxxxxxx}} is a placeholder for the password of database user ''nextcloud'' that you have to choose.)<br />
<br />
Install the additional package {{Pkg|php-pgsql}} as dependency (pacman option {{ic|--asdeps}}) and enable the corresponding PHP extension in {{ic|/etc/webapps/nextcloud/php.ini}}:<br />
<br />
{{hc|/etc/webapps/nextcloud/php.ini|2=<br />
extension=pdo_pgsql<br />
}}<br />
<br />
Now setup Nextcloud's database schema with:<br />
<br />
{{bc|1=<br />
occ maintenance:install \<br />
--database=pgsql \<br />
--database-name=nextcloud \<br />
--database-host=/run/postgresql \<br />
--database-user=nextcloud \<br />
--database-pass=xxxxxxxx \<br />
--admin-pass=zzzzzzzz \<br />
--admin-email=aaaa@bbbbb \<br />
--data-dir=/var/lib/nextcloud/data<br />
}}<br />
<br />
Mind the placeholders (e.g. {{ic|xxxxxxxx}}) and replace them with appropriate values. This command assumes that you run your database on the same host as Nextcloud. Enter {{ic|occ help maintenance:install}} and see Nextcloud's [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html#command-line-installation documentation] for other options.<br />
<br />
== Application server ==<br />
<br />
There are two prevalent application servers that can be used to process PHP code: [[uWSGI]] and [https://cwiki.apache.org/confluence/display/HTTPD/PHP-FPM php-fpm]. ''php-fpm'', as the name suggests, is specialized in PHP. The protocol used between the web server and ''php-fpm'' is ''FastCGI''. The tool's [https://www.php.net/manual/en/install.fpm.php documentation] leaves room for improvement. ''uWSGI'' on the other hand can serve code written in a [https://uwsgi-docs.readthedocs.io/en/latest/LanguagesAndPlatforms.html handful of languages] by means of language specific plugins. The protocol used is ''uwsgi'' (lowercase). The tool is [https://uwsgi-docs.readthedocs.io/en/latest/index.html extensively documented] - although the sheer amount of documentation can become confusing and unwieldy.<br />
<br />
{{Warning|It has to be mentioned that maintenance of uWSGI, and especially of its PHP plugin, has been sparse lately[https://github.com/unbit/uwsgi/issues/2287]. This has already caused [https://bugs.archlinux.org/task/73470 issues] that could only be solved by the maintainers of the Arch Linux packages patching uWSGI code, i.e. not upstream.}}<br />
<br />
=== uWSGI ===<br />
<br />
uWSGI has its own [[uWSGI|article]]. A lot of useful information can be found there. Install {{pkg|uwsgi}} and the plugin {{pkg|uwsgi-plugin-php}} - preferably as dependencies, i.e. with {{ic|--asdeps}}. To run Nextcloud's code with (or in) uWSGI you have to create one uWSGI specific configuration file ({{ic|nextcloud.ini}}) and define one systemd service.<br />
<br />
'''nextcloud.ini'''<br />
<br />
The {{Pkg|nextcloud}} package includes a sample configuration file already in the right place, {{ic|/etc/uwsgi/nextcloud.ini}}. In almost any case you will have to adapt this file to your requirements and setup. A [https://gist.githubusercontent.com/wolegis/fc0c01882b694777a6565aa1d0a4da47 version with lots of commented changes] (compared to the package's version) is available. It assumes a no-frills Nextcloud installation for private use (i.e. with moderate load).<br />
<br />
See the section [[#Background jobs|Background jobs]] for arguments why recurring jobs should not be configured in this file. In general, keep the enabled extensions, extension specific settings and {{ic|open_basedir}} in sync with {{ic|/etc/webapps/nextcloud/php.ini}} (with the exception of opcache).<br />
<br />
{{Tip|The changes to {{ic|/etc/uwsgi/nextcloud.ini}} can become extensive. A file named {{ic|nextcloud.ini.pacnew}} will be created during package update in case there are changes in the original file provided by the package {{Pkg|nextcloud}}. In order to better track changes in this latter file and apply them to {{ic|/etc/uwsgi/nextcloud.ini}} the following approach can be applied:<br />
<br />
* Make a copy of the file as provided by the package (e.g. by extracting from the package) and store it as {{ic|nextcloud.ini.package}}.<br />
* In case an update of package {{Pkg|nextcloud}} produces a {{ic|nextcloud.ini.pacnew}} you can identify the changes with {{ic|diff nextcloud.ini.package nextcloud.ini.pacnew}}.<br />
* Selectively apply the changes to your {{ic|nextcloud.ini}} depending on whether they make sense with your version or not.<br />
* Move {{ic|nextcloud.ini.pacnew}} over {{ic|nextcloud.ini.package}}.<br />
}}<br />
<br />
'''Enable and start'''<br />
<br />
The package {{pkg|uwsgi}} provides a template unit file ({{ic|uwsgi@.service}}). The instance ID (here ''nextcloud'') is used to pick up the right configuration file. [[Enable/start]] {{ic|uwsgi@nextcloud.service}}. <br />
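<br />
To check that the service came up and created its socket (assuming the socket path {{ic|/run/uwsgi/nextcloud.sock}} from the sample configuration):<br />
<br />
{{bc|<br />
$ systemctl status uwsgi@nextcloud.service<br />
$ ls -l /run/uwsgi/nextcloud.sock<br />
}}<br />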
<br />
In case you have more than a few (e.g. 2) services started like this and get the impression this is a waste of resources, you might consider using [https://uwsgi-docs.readthedocs.io/en/latest/Emperor.html emperor mode].<br />
<br />
=== php-fpm ===<br />
<br />
In case you opt to use ''php-fpm'' as your application server, install {{pkg|php-fpm}} - preferably as a dependent package ({{ic|--asdeps}}).<br />
<br />
Configuration consists of a copy of {{ic|php.ini}} relevant for all applications served by ''php-fpm'' and a so-called pool file specific for the application (here Nextcloud). Finally you have to tweak the systemd service file.<br />
<br />
'''php-fpm.ini'''<br />
<br />
As stated earlier, this article avoids modifications of PHP's central configuration in {{ic|/etc/php/php.ini}}. Instead create a ''php-fpm'' specific copy:<br />
<br />
{{bc|cp /etc/php/php.ini /etc/php/php-fpm.ini}}<br />
<br />
Make sure it is owned and only writeable by root ({{ic|-rw-r--r-- 1 root root ... php-fpm.ini}}). Enable the opcache, i.e. uncomment the line<br />
<br />
{{bc|1=;zend_extension=opcache}}<br />
<br />
and put the following parameters below the existing line {{ic|[opcache]}}:<br />
<br />
{{hc|/etc/php/php-fpm.ini|2=opcache.enable = 1<br />
opcache.interned_strings_buffer = 8<br />
opcache.max_accelerated_files = 10000<br />
opcache.memory_consumption = 128<br />
opcache.save_comments = 1<br />
opcache.revalidate_freq = 1}}<br />
<br />
{{Warning|Do not try to put these settings in the pool file by means of {{ic|php_value[...]}} and {{ic|php_flag[...]}}. Your ''php-fpm'' processes will consistently crash with the very first request.}}<br />
<br />
'''nextcloud.conf pool file'''<br />
<br />
Next you have to create a so-called pool file for ''php-fpm''. It is responsible for spawning dedicated ''php-fpm'' processes for the Nextcloud application. Create a file {{ic|/etc/php/php-fpm.d/nextcloud.conf}} - you may use this [https://gist.githubusercontent.com/wolegis/0d9c83acd0c8bf83bcfb3983931bc364 functional version] as a starting point.<br />
<br />
Again make sure this pool file is owned and only writeable by root (i.e. {{ic|-rw-r--r-- 1 root root ... nextcloud.conf}}). Adapt or add settings (especially {{ic|pm...}}, {{ic|php_value[...]}} and {{ic|php_flag[...]}}) to your liking. The settings {{ic|php_value[...]}} and {{ic|php_flag[...]}} must be consistent with the corresponding settings in {{ic|/etc/webapps/nextcloud/php.ini}} (but not {{ic|/etc/php/php-fpm.ini}}).<br />
<br />
The settings done by means of {{ic|php_value[...]}} and {{ic|php_flag[...]}} could instead be specified in {{ic|php-fpm.ini}}. But mind that settings in {{ic|php-fpm.ini}} apply for all applications served by ''php-fpm''.<br />
<br />
{{Tip|The package {{pkg|php-fpm}} comes with its own pool file {{ic|www.conf}} that is of little use here. A good approach to get rid of it is to rename it to {{ic|www.conf.package}} and create a file {{ic|www.conf}} with only comment lines (lines starting with a semicolon). This way {{ic|www.conf}} becomes a no-op. It is also not overwritten during installation of a new version of {{pkg|php-fpm}}. Instead a file {{ic|www.conf.pacnew}} is created. You can compare this against {{ic|www.conf.package}} to see if anything significant has changed in the pool file that you may have to reproduce in {{ic|nextcloud.conf}}. Do not forget to rename {{ic|www.conf.pacnew}} to {{ic|www.conf.package}} at the end of this procedure.}}<br />
<br />
'''php-fpm service'''<br />
<br />
''php-fpm'' is (of course) run as a systemd service. You have to modify the service configuration to be able to run Nextcloud. This is best achieved by means of a [[drop-in file]]; add:<br />
<br />
{{hc|/etc/systemd/system/php-fpm.service.d/override.conf|2=<br />
[Service]<br />
ExecStart=<br />
ExecStart=/usr/bin/php-fpm --nodaemonize --fpm-config /etc/php/php-fpm.conf --php-ini /etc/php/php-fpm.ini<br />
ReadWritePaths=/var/lib/nextcloud<br />
ReadWritePaths=/etc/webapps/nextcloud/config<br />
}}<br />
<br />
* It replaces the {{ic|ExecStart}} line by a start command that uses the {{ic|php-fpm.ini}} covered in the previous section.<br />
* The directories {{ic|/var/lib/nextcloud}} and {{ic|/etc/webapps/nextcloud/config}} (and everything below) are made writable. The {{ic|1=ProtectSystem=full}} in the original service definition causes {{ic|/usr}}, {{ic|/boot}} and {{ic|/etc}} to be mounted read-only for the ''php-fpm'' processes.<br />
<br />
'''Enable and start'''<br />
<br />
Do not forget to [[enable]] and [[start]] the ''php-fpm'' service.<br />
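<br />
To verify the pool is running and listening on its socket (assuming the pool file sets {{ic|1=listen = /run/php-fpm/nextcloud.sock}}, the path referenced in the nginx section):<br />
<br />
{{bc|<br />
$ systemctl status php-fpm.service<br />
$ ls -l /run/php-fpm/nextcloud.sock<br />
}}<br />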
<br />
'''Keep /etc tidy'''<br />
<br />
The Nextcloud package unconditionally creates the uWSGI configuration file {{ic|/etc/uwsgi/nextcloud.ini}}. Of course it is of no use when you run ''php-fpm'' instead of ''uWSGI'' (though it does no harm whatsoever). In case you nevertheless want to get rid of it, just add the following lines to {{ic|/etc/pacman.conf}}:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
# uWSGI configuration that comes with Nextcloud is not needed<br />
NoExtract = etc/uwsgi/nextcloud.ini}}<br />
<br />
== Web server ==<br />
<br />
There is an abundance of web servers you can choose from. Whatever option you finally pick, keep in mind that the Nextcloud application needs to be run with its own system user ''nextcloud''. So you will need to forward your requests to one of the above mentioned application servers.<br />
<br />
=== nginx ===<br />
<br />
Configuration of ''nginx'' is way beyond the scope of this article. See the relevant [[nginx|article]] for further information. Also consult [https://docs.nextcloud.com/server/latest/admin_manual/installation/nginx.html Nextcloud's documentation] for an elaborated configuration. Most likely you will have to copy it into a file with an appropriate name below {{ic|/etc/nginx/sites-available}} and create the corresponding symbolic link in {{ic|/etc/nginx/sites-enabled}}.<br />
<br />
The usage of the block {{ic|upstream php-handler { ... } }} is not necessary. Just specify {{ic|fastcgi_pass unix:/run/php-fpm/nextcloud.sock;}} in the {{ic|location}} block that deals with forwarding requests with PHP URIs to the application server. When using ''uWSGI'' instead of ''php-fpm'', replace this {{ic|location}} block with:<br />
<br />
{{bc|<br />
location ~ \.php(?:${{!}}/) {<br />
include uwsgi_params;<br />
uwsgi_modifier1 14;<br />
# Avoid duplicate headers confusing OC checks<br />
uwsgi_hide_header X-Frame-Options;<br />
uwsgi_hide_header X-XSS-Protection;<br />
uwsgi_hide_header X-Content-Type-Options;<br />
uwsgi_hide_header X-Robots-Tag;<br />
uwsgi_hide_header X-Download-Options;<br />
uwsgi_hide_header X-Permitted-Cross-Domain-Policies;<br />
uwsgi_pass unix:/run/uwsgi/nextcloud.sock;<br />
}<br />
}}<br />
<br />
Things you might have to adapt (not exhaustive):<br />
<br />
* Your server name ({{ic|server_name}} clauses 2x), i.e. the server part of the URL your Nextcloud installation will be reachable with.<br />
* The name of the certificate and key you use for SSL / TLS.<br />
* If and where you want an access log written to.<br />
* The location where [[Certbot]] (or any other ACME client) will put the domain verification challenges. Usage of {{ic|alias}} instead of {{ic|try_files}} is probably more adequate here.<br />
* The path used to reach your Nextcloud installation. (The part to the right of the server name &amp; port section in the URL.)<br />
* What application server (uWSGI or php-fpm) you are using, i.e. how and where nginx will pass requests that need to trigger some PHP code. (See above.)<br />
* Configure [[Wikipedia:OCSP_stapling|OCSP stapling]].<br />
<br />
There is no need to install any additional modules since nginx natively supports both protocols FastCGI and uwsgi.<br />
<br />
=== Apache httpd ===<br />
<br />
Find lots of useful information in the article about the [[Apache_HTTP_Server|Apache HTTP Server]]. Nextcloud's documentation has some [https://docs.nextcloud.com/server/latest/admin_manual/installation/source_installation.html#apache-web-server-configuration sample configuration] that can also be found in {{ic|/usr/share/doc/nextcloud/apache.example.conf}}. Both implicitly rely on ''mod_php'', which can no longer be used. ''mod_proxy_fcgi'' or ''mod_proxy_uwsgi'' need to be applied instead.<br />
<br />
Information about how to [[Apache_HTTP_Server#Using_php-fpm_and_mod_proxy_fcgi|integrate Apache with php-fpm]] can be found here in this wiki. uWSGI's documentation has some information about how to [https://uwsgi-docs.readthedocs.io/en/latest/Apache.html integrate Apache with PHP by means of uWSGI and mod_proxy_uwsgi]. Mind that the Apache package comes with both modules ''mod_proxy_fcgi'' and ''mod_proxy_uwsgi''. They need to be loaded as required.<br />
<br />
The following Apache modules are required to run Nextcloud:<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
# these are already loaded in a standard Apache installation<br />
LoadModule headers_module modules/mod_headers.so<br />
LoadModule env_module modules/mod_env.so<br />
LoadModule dir_module modules/mod_dir.so<br />
LoadModule mime_module modules/mod_mime.so<br />
LoadModule setenvif_module modules/mod_setenvif.so<br />
<br />
# these need to be uncommented explicitly<br />
LoadModule rewrite_module modules/mod_rewrite.so<br />
LoadModule ssl_module modules/mod_ssl.so<br />
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so<br />
LoadModule proxy_module modules/mod_proxy.so<br />
<br />
# either this one in case you use php-fpm<br />
LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so<br />
# or this one in case you opt for uWSGI<br />
LoadModule proxy_uwsgi_module modules/mod_proxy_uwsgi.so<br />
}}<br />
<br />
Also uncomment the following directive to pull in TLS configuration parameters:<br />
<br />
{{hc|/etc/httpd/conf/httpd.conf|<br />
Include conf/extra/httpd-ssl.conf}}<br />
<br />
Consult [https://ssl-config.mozilla.org/#server=apache&config=intermediate Mozilla's SSL configurator] for details about how to optimize your TLS configuration.<br />
<br />
Refer to the following two sample configuration files depending on how you want to access your Nextcloud installation:<br />
<br />
* In case your Nextcloud installation is accessed via a dedicated host name (e.g. <nowiki>https://cloud.example.com/</nowiki>) put [https://gist.github.com/wolegis/1659786ded9128935f638ee2bf228906 this] fragment into {{ic|/etc/httpd/conf/extra/httpd-vhosts.conf}}.<br />
<br />
* In case your Nextcloud installation is located in a subfolder of your web site (e.g. <nowiki>https://www.example.com/nextcloud/</nowiki>) put [https://gist.github.com/wolegis/002e198c2db7980a84fd8d160c2bdb9a this] fragment in your {{ic|/etc/httpd/conf/httpd.conf}}.<br />
<br />
Of course you must adapt these sample configuration files to your concrete setup. Replace the {{ic|SetHandler}} directive with {{ic|SetHandler "proxy:unix:/run/uwsgi/nextcloud.sock{{!}}uwsgi://nextcloud/"}} when you use ''uWSGI''.<br />
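For the ''php-fpm'' case, the handler block in those fragments looks roughly like the following sketch. The socket path {{ic|/run/php-fpm/nextcloud.sock}} is an assumption; it must match the {{ic|listen}} directive of your php-fpm pool.

```apache
# Sketch only: forward PHP requests to the php-fpm pool over a unix socket.
# The socket path below is an assumption - align it with your pool's "listen".
<FilesMatch \.php$>
    SetHandler "proxy:unix:/run/php-fpm/nextcloud.sock|fcgi://localhost/"
</FilesMatch>
```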
<br />
The Nextcloud package comes with a {{ic|.htaccess}} file that already takes care of most of the required rewriting and header configuration. Run {{ic|occ maintenance:update:htaccess}} to adapt this file. The parameter {{ic|htaccess.RewriteBase}} in {{ic|/etc/webapps/nextcloud/config/config.php}} must be set correctly for this to work.<br />
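As a sketch, the parameter looks like this for an installation served from the web root; a subfolder install would use the subfolder path instead:

```php
// In /etc/webapps/nextcloud/config/config.php
// '/' assumes Nextcloud is served from the web root of its (sub)domain;
// use e.g. '/nextcloud' when it lives in a subfolder.
'htaccess.RewriteBase' => '/',
```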
<br />
== Background jobs ==<br />
<br />
Nextcloud requires certain tasks to be run on a scheduled basis. See Nextcloud's [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html documentation] for some details. The easiest (and most reliable) way to set up these background jobs is to use the systemd service and timer units that are already installed by {{Pkg|nextcloud}}. The service unit needs some tweaking so that the job uses the correct PHP ini-file (and not the global {{ic|php.ini}}). Create a [[drop-in file]] (e.g. with [[systemctl edit]]) and add:<br />
<br />
{{hc|/etc/systemd/system/nextcloud-cron.service.d/override.conf|2=<br />
[Service]<br />
ExecStart=<br />
ExecStart=/usr/bin/php -c /etc/webapps/nextcloud/php.ini -f /usr/share/webapps/nextcloud/cron.php<br />
}}<br />
<br />
After that [[enable/start]] {{ic|nextcloud-cron.timer}} (not the service).<br />
<br />
== In-memory caching ==<br />
<br />
Nextcloud's [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/caching_configuration.html documentation] recommends applying some kind of in-memory object cache to significantly improve performance. This section demonstrates the setup of [https://pecl.php.net/package/APCu APCu] - mainly to pinpoint the details that differ from the instructions given in Nextcloud's documentation. The other options ([https://redis.io/ Redis] and [https://www.memcached.org/ memcached]) are sufficiently covered there.<br />
<br />
Install {{Pkg|php-apcu}} (as a dependency, i.e. with {{ic|--asdeps}}). Enable the extension in the relevant configuration files. These are<br />
<br />
* {{ic|/etc/webapps/nextcloud/php.ini}} used by the {{ic|occ}} command and the background jobs and<br />
* depending on the application server you use either<br />
** {{ic|/etc/uwsgi/nextcloud.ini}} in case you use ''uWSGI'' or<br />
** {{ic|/etc/php/php-fpm.d/nextcloud.conf}} in case you use ''php-fpm''.<br />
<br />
The parameter to do so is already present in each file and only needs to be uncommented. Two other configuration parameters related to ''APCu'' are also already there.<br />
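After uncommenting, the relevant lines look roughly like this (a sketch; the exact parameter set shipped in the Arch files may differ slightly):

```ini
; enable the APCu extension for this PHP instance
extension=apcu
; allow occ and the background jobs (CLI) to use APCu as well
apc.enable_cli=1
```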
<br />
Restart your application server (not the web server as Nextcloud's documentation claims). Add the following line to your Nextcloud configuration file:<br />
<br />
{{hc|/etc/webapps/nextcloud/config/config.php|2='memcache.local' => '\OC\Memcache\APCu',}}<br />
<br />
That's it. Enjoy your performance boost!<br />
<br />
{{Note|Mind that [https://github.com/nextcloud/notify_push push notify] (the Nextcloud service that replaces client polling with server-side notifications, thus drastically reducing sync latency) depends on Redis.}}<br />
<br />
== Security hardening ==<br />
<br />
See the [https://docs.nextcloud.com/server/latest/admin_manual/installation/harden_server.html Nextcloud documentation] and [[Security]]. Nextcloud additionally provides a [https://scan.nextcloud.com/ Security scanner].<br />
<br />
== Synchronization ==<br />
<br />
{{Tip|For software that automatically authenticates against your Nextcloud server it is recommended to use so-called app tokens instead of your personal password. If you suspect that the credentials stored in one of these programs have leaked, you then only need to revoke the one affected app token, instead of having to change your personal password and propagate this change to all places where it is stored. You can generate a new app token in Nextcloud's web GUI under ''Settings'' &rarr; ''Security'' in the ''Devices & Sessions'' section.}}<br />
<br />
=== Desktop ===<br />
<br />
The official client can be installed with the {{Pkg|nextcloud-client}} package. A development version is available in the [[AUR]]: {{AUR|nextcloud-client-git}}. Keep in mind that using {{Pkg|owncloud-client}} with Nextcloud is not supported.<br />
<br />
The desktop client syncs one or more directories on your desktop computer with the corresponding folders in your Nextcloud's file service. It integrates nicely with your desktop's file manager (Dolphin in KDE Plasma, Nautilus in GNOME), displaying overlays that represent synchronization and share status. The context menu of each file gets an additional entry ''Nextcloud'' to manage sharing of the file and to obtain its public or internal share link. Nextcloud's documentation has a dedicated [https://docs.nextcloud.com/desktop/ volume] exclusively about the desktop client.<br />
<br />
{{Expansion|As of 2022-02-06 (desktop client version 3.4.1) the following additional packages do not seem to be necessary any more. Please elaborate in case they still are.}}<br />
<br />
Additional packages are needed for some features:<br />
<br />
* '''Auto-login:''' All of them use {{Pkg|qtkeychain-qt5}} to store and retrieve account-specific access tokens. To achieve auto-login when the client starts, one of the optional dependencies of ''qtkeychain'' should be installed as well. Moreover, if you choose {{Pkg|libsecret}} as the backend for ''qtkeychain'', a service that provides [https://archlinux.org/packages/?q=org.freedesktop.secrets org.freedesktop.secrets] should be running when the client starts.<br />
* '''File manager integration:''' integration with file managers (e.g. showing Nextcloud folders in GTK+ file dialogs) requires the additional package {{Pkg|nextcloud-client-cloudproviders}}; {{Pkg|nextcloud-client}} itself already includes cloud providers support by default.<br />
<br />
=== Thunderbird ===<br />
<br />
Since version 91 [[Thunderbird]] fully supports CalDAV and CardDAV - even with auto detection (i.e. you do not have to provide long URLs to access your calendars and address books). Nextcloud's [https://docs.nextcloud.com/server/latest/user_manual/en/pim/sync_thunderbird.html documentation] is not up to date in this respect.<br />
<br />
==== Calendar ====<br />
<br />
There are a few ways to start the ''new calendar'' wizard. One is via the main menu (&#x2630; at the very right) &rarr; &#x2795; ''New'' &rarr; ''Calendar&hellip;'' Choose ''Network'' and click ''Next''. On the next page enter your username (the one on your Nextcloud server) and the top URL of your Nextcloud server (e.g. {{ic|https://cloud.mysite.org/}}). Click ''Search calendar''. Now provide your password (or, even better, an app token, see above). Finally select the calendar(s) you want to see in Thunderbird and click ''Subscribe''. Be sure to mark read-only calendars (e.g. Nextcloud's birthday calendar) as read-only. Otherwise you will repeatedly see reminders that cannot be dismissed.<br />
<br />
==== Contacts ====<br />
<br />
Open Thunderbird's address book - e.g. with ''Shift+Ctrl+B''. Choose ''File'' &rarr; ''New'' &rarr; ''CardDAV address book''. On the next page enter your username (the one on your Nextcloud server) and the top URL of your Nextcloud server (e.g. {{ic|https://cloud.mysite.org/}}). Click ''Next''. Now provide your password (or, even better, an app token, see above). Finally select the address book(s) you want to see in Thunderbird's address book window and click ''Next''.<br />
<br />
=== Mounting files with davfs2 ===<br />
<br />
If you want to mount your Nextcloud using WebDAV, install {{AUR|davfs2}} (as described in [[davfs2]]).<br />
<br />
To mount your Nextcloud, use:<br />
<br />
# mount -t davfs https://''your_domain''/nextcloud/remote.php/dav/files/''username''/ /path/to/mount<br />
<br />
You can also create an entry for this in {{ic|/etc/fstab}}:<br />
<br />
{{hc|/etc/fstab|<br />
https://''your_domain''/nextcloud/remote.php/dav/files/''username''/ /path/to/mount davfs rw,user,noauto 0 0<br />
}}<br />
<br />
{{Tip|In order to allow automount you can also store your username (and password if you like) in a file as described in [[davfs2#Storing credentials]].}}<br />
<br />
{{Note|If creating/copying files is not possible, while the same operations work on directories, see [[davfs2#Creating/copying files not possible and/or freezes]].}}<br />
<br />
=== Mounting files in GNOME Files (Nautilus) ===<br />
<br />
You can access the files directly in Nautilus (''+ Other Locations'') through the WebDAV protocol - use the link as shown in your Nextcloud installation's web GUI (typically <nowiki>https://example.org/remote.php/webdav/</nowiki>) but replace the protocol 'https' with 'davs'. Nautilus will ask for the user name and password when trying to connect.<br />
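The rewrite is a plain scheme swap, sketched here with a hypothetical example.org server:

```shell
# Turn the WebDAV URL shown in the web GUI into the form Nautilus expects
webdav_url="https://example.org/remote.php/webdav/"
davs_url="davs${webdav_url#https}"   # strip the "https" prefix, prepend "davs"
echo "$davs_url"   # davs://example.org/remote.php/webdav/
```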
<br />
=== Android ===<br />
<br />
Download the official Nextcloud app from [https://play.google.com/store/apps/details?id=com.nextcloud.client Google Play] or [https://f-droid.org/packages/com.nextcloud.client/ F-Droid].<br />
<br />
To enable contacts and calendar sync (Android 4+):<br />
# download [https://www.davx5.com/ DAVx<sup>5</sup>] ([https://play.google.com/store/apps/details?id=at.bitfire.davdroid Play Store], [https://f-droid.org/app/at.bitfire.davdroid F-Droid])<br />
# enable the {{ic|mod_rewrite}} module in {{ic|httpd.conf}}<br />
# create a new DAVx<sup>5</sup> account in the ''Account'' settings, and specify your "short" server address and login/password pair, e.g. {{ic|<nowiki>https://cloud.example.com</nowiki>}} (there is no need for the {{ic|<nowiki>/remote.php/{carddav,webdav}</nowiki>}} part if you configured your web server with the proper redirections, as illustrated previously in the article; ''DAVx<sup>5</sup>'' will find the right URLs itself)<br />
<br />
=== iOS ===<br />
<br />
Download the official Nextcloud app from the [https://itunes.apple.com/us/app/nextcloud/id1125420102 App Store].<br />
<br />
== Tips and tricks ==<br />
<br />
=== Using the {{ic|occ}} command line tool ===<br />
<br />
A useful tool for server administration is {{ic|occ}}. Refer to Nextcloud's documentation for [https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html details]. You can perform many common server operations with occ, such as managing users and configuring apps.<br />
<br />
A convenience wrapper around {{ic|/usr/share/webapps/nextcloud/occ}} is provided as {{ic|/usr/bin/occ}}, which automatically runs as the default user (''nextcloud''), using the default [[PHP]] executable and PHP configuration file. The environment variables {{ic|NEXTCLOUD_USER}}, {{ic|NEXTCLOUD_PHP}} and {{ic|NEXTCLOUD_PHP_CONFIG}} can be used to specify a non-default user, PHP executable and PHP configuration file (respectively). Especially the latter ({{ic|NEXTCLOUD_PHP_CONFIG}}) is necessary when Nextcloud was set up as described in the sections [[#Configuration]] and [[#Application servers]], i.e. using PHP configurations specific to Nextcloud. In this case put {{ic|1=export NEXTCLOUD_PHP_CONFIG=/etc/webapps/nextcloud/php.ini}} in your {{ic|.bashrc}}.<br />
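The wrapper's defaulting can be sketched like this (a simplified illustration only; the real {{ic|/usr/bin/occ}} may differ in detail):

```shell
# Fall back to defaults when the variables are unset
NEXTCLOUD_USER="${NEXTCLOUD_USER:-nextcloud}"
NEXTCLOUD_PHP="${NEXTCLOUD_PHP:-/usr/bin/php}"
# Without NEXTCLOUD_PHP_CONFIG the global php.ini would be used,
# hence the export recommended above for Nextcloud-specific setups
NEXTCLOUD_PHP_CONFIG="${NEXTCLOUD_PHP_CONFIG:-/etc/php/php.ini}"
echo "$NEXTCLOUD_USER"
```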
<br />
{{Warning| When using {{pkg|php-apcu}} for caching, make sure to set {{ic|1= apc.enable_cli=1}} in {{ic|/etc/webapps/nextcloud/php.ini}}, as the {{ic|occ}} command will otherwise run out of memory ({{Bug|69726}}).}}<br />
<br />
=== Pacman hook ===<br />
<br />
To automatically upgrade the Nextcloud database on package update, you can make use of the included [[pacman hook]]:<br />
<br />
# mkdir -vp /etc/pacman.d/hooks<br />
# ln -sv /usr/share/doc/nextcloud/nextcloud.hook /etc/pacman.d/hooks/<br />
<br />
{{Note| The packaged pacman hook assumes that the global {{ic|php.ini}} is used for the application.}}<br />
<br />
=== Running Nextcloud in a subdirectory ===<br />
<br />
The instructions in section [[#Web server|Web server]] will result in a setup where your Nextcloud installation is reachable via a dedicated server name, e.g. {{ic|cloud.mysite.com}}. If you would like to have Nextcloud located in a subdirectory, e.g. {{ic|www.mysite.com/nextcloud}}, then:<br />
<br />
* For nginx refer to the section in Nextcloud's documentation that explicitly covers this [https://docs.nextcloud.com/server/latest/admin_manual/installation/nginx.html#nextcloud-in-a-subdir-of-the-nginx-webroot topic].<br />
<br />
* For Apache edit the {{ic|/etc/httpd/conf/extra/nextcloud.conf}} you included and comment out the {{ic|<nowiki><VirtualHost *:80> ... </VirtualHost></nowiki>}} part of the include file.<br />
<br />
{{Note| Do not forget to configure the {{ic |.well-known}} URLs for service discovery. For more information please see [https://docs.nextcloud.com/server/latest/admin_manual/issues/general_troubleshooting.html#service-discovery Service discovery] in Nextcloud's documentation.}}<br />
<br />
=== Docker ===<br />
<br />
See the [https://hub.docker.com/_/owncloud/ ownCloud] or [https://github.com/nextcloud/docker Nextcloud] repository for [[Docker]].<br />
<br />
=== Office integration ===<br />
<br />
There are currently three different solutions for office integration:<br />
<br />
* [https://www.collaboraoffice.com/collabora-online/ Collabora Online]<br />
* [https://www.onlyoffice.com/ ONLYOFFICE]<br />
* [https://docs.microsoft.com/en-us/officeonlineserver/office-online-server-overview MS Office Online Server]<br />
<br />
All three have in common that a dedicated server is required and your web server needs to be adapted to forward certain requests to the office service. The actual integration with Nextcloud is then accomplished by means of a Nextcloud app specific for one of the above products.<br />
<br />
Mind that all three products are aimed at businesses, i.e. you will have to pay for the office service. Only Collabora offers a developers plan ([https://www.collaboraoffice.com/code/ CODE]) for free. ONLYOFFICE offers a [https://www.onlyoffice.com/en/docs-enterprise-prices.aspx Home Server] plan for a reasonable price.<br />
<br />
For installation, setup instructions and integration with Nextcloud consult:<br />
<br />
* [https://nextcloud.com/collaboraonline/ Collabora online], [https://apps.nextcloud.com/apps/richdocuments app]<br />
* [https://api.onlyoffice.com/editors/nextcloud ONLYOFFICE], [https://apps.nextcloud.com/apps/onlyoffice app]<br />
* [https://github.com/nextcloud/officeonline MS Office Online Server], [https://apps.nextcloud.com/apps/officeonline app]<br />
<br />
=== Disabling app recommendations ===<br />
<br />
By default, Nextcloud recommends apps to new clients, which can result in a lot of notifications. To disable this, disable the recommendation app using {{ic|occ}}.<br />
<br />
=== Backup calendars and address books with calcardbackup ===<br />
<br />
The {{AUR|calcardbackup}} package can be installed and configured to provide regular backups of the calendar and/or address book databases. Edit {{ic|/etc/calcardbackup/calcardbackup.conf}} to your liking and then [[start]] and [[enable]] {{ic|calcardbackup.timer}}.<br />
<br />
== Troubleshooting ==<br />
<br />
By default, the logs of the web application are available in {{ic|/var/log/nextcloud/nextcloud.log}}.<br />
<br />
=== Issues with permissions and setup after upgrade to >= 21.0.0 ===<br />
<br />
{{Note| Before Nextcloud 21.0.0, the web application was run as the {{ic|http}} user. This is a security concern with regard to cross-application access by this user (it has access to all data of all web applications).}}<br />
<br />
Since version 21.0.0 Nextcloud more closely follows the [[web application package guidelines]]. This introduces the separate user {{ic|nextcloud}}, as which the web application is run.<br />
<br />
After an upgrade from Nextcloud < 21.0.0 make sure that<br />
<br />
* the data directory is located at {{ic|/var/lib/nextcloud/data}}<br />
* the writable apps directory is located at {{ic|/var/lib/nextcloud/apps}}<br />
* both the data directory and the writable apps directory, alongside all files beneath them are writable and owned by the {{ic|nextcloud}} user<br />
* the web application configuration file resides in {{ic|/etc/webapps/nextcloud/config/}} and that that directory and its contents are writable and owned by the {{ic|nextcloud}} user<br />
* an application server, such as {{pkg|php-fpm}} or [[UWSGI]] is configured to run the web application as the {{ic|nextcloud}} user and not the {{ic|http}} user<br />
* the cron job/systemd timer is updated to run as the new user<br />
<br />
=== Login loop without any clue in access.log, error.log, nor nextcloud.log ===<br />
<br />
As mentioned in a [https://bbs.archlinux.org/viewtopic.php?pid=1967719#p1967719 post in the forum], this issue can be fixed by setting correct permissions on the ''sessions'' directory. (See [https://docs.nextcloud.com/server/stable/admin_manual/installation/nginx.html#login-loop-without-any-clue-in-access-log-error-log-nor-nextcloud-log Nextcloud's documentation] for details.) It is also possible that the ''sessions'' directory is missing altogether. The creation of this directory is documented in [[#System_and_environment|System and environment]].<br />
<br />
{{ic|/var/lib/nextcloud}} should look like this:<br />
<br />
drwxr-xr-x 6 nextcloud nextcloud 4096 17. Apr 00:56 ./<br />
drwxr-xr-x 21 root root 4096 17. Apr 00:53 ../<br />
drwxr-xr-x 2 nextcloud nextcloud 4096 16. Feb 00:21 apps/<br />
drwxrwx--- 10 nextcloud nextcloud 4096 16. Apr 13:46 data/<br />
drwx------ 2 nextcloud nextcloud 4096 17. Apr 01:04 sessions/<br />
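If {{ic|sessions/}} is missing, it can be recreated with the modes shown above. The sketch below rehearses the layout in a scratch directory; on the real system replace {{ic|$root}} with {{ic|/var/lib/nextcloud}} and add {{ic|-o nextcloud -g nextcloud}} (which requires root):

```shell
root=$(mktemp -d)               # stand-in for /var/lib/nextcloud
install -d -m 755 "$root/apps"
install -d -m 770 "$root/data"
install -d -m 700 "$root/sessions"
stat -c '%a' "$root/sessions"   # 700
```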
<br />
=== Environment variables not available ===<br />
<br />
Depending on the application server you use, custom environment variables can be provided to Nextcloud's PHP code.<br />
<br />
'''php-fpm'''<br />
<br />
Add one or more lines in {{ic|/etc/php/php-fpm.d/nextcloud.conf}} as per [https://docs.nextcloud.com/server/latest/admin_manual/installation/source_installation.html#php-fpm-tips-label Nextcloud's documentation], e.g.:<br />
<br />
env[PATH] = /usr/local/bin:/usr/bin:/bin<br />
<br />
'''uwsgi'''<br />
<br />
Add one or more lines in {{ic|/etc/uwsgi/nextcloud.ini}}, e.g.:<br />
<br />
env = PATH=/usr/local/bin:/usr/bin:/bin<br />
<br />
Mind that there must not be any blanks around the second {{ic|1==}} sign.<br />
<br />
=== Self-signed certificate not accepted ===<br />
<br />
{{Remove|Subsections below and including this one are considered outdated (e.g. mentioning of owncloud). In case you know a certain subsection still applies, please update it and move it above this marker. Any subsection not updated and moved above this marker by 2022-09-30 will be deleted.}}<br />
<br />
ownCloud uses [[Wikipedia:cURL]] and [[Wikipedia:SabreDAV]] to check if WebDAV is enabled.<br />
If you use SSL/TLS with a self-signed certificate, e.g. as shown in [[LAMP]], and access ownCloud's admin panel, you will see the following error message:<br />
<br />
Your web server is not yet properly setup to allow files synchronization because the WebDAV interface seems to be broken.<br />
<br />
Assuming that you followed the [[LAMP]] tutorial, execute the following steps:<br />
<br />
Create a local directory for non-distribution certificates and copy [[LAMP]]s certificate there. This will prevent {{ic|ca-certificates}}-updates from overwriting it.<br />
<br />
# cp /etc/httpd/conf/server.crt /usr/share/ca-certificates/''WWW.EXAMPLE.COM.crt''<br />
<br />
Add ''WWW.EXAMPLE.COM.crt'' to {{ic|/etc/ca-certificates.conf}}:<br />
<br />
''WWW.EXAMPLE.COM.crt''<br />
<br />
Now, regenerate your certificate store:<br />
<br />
# update-ca-certificates<br />
<br />
Restart the httpd service to activate your certificate.<br />
<br />
=== Self-signed certificate for Android devices ===<br />
<br />
Once you have followed the setup for SSL, as on [[Apache HTTP Server#TLS]] for example, early versions of DAVdroid will<br />
reject the connection because the certificate is not trusted. A certificate can be made as follows on your server:<br />
<br />
# openssl x509 -req -days 365 -in /etc/httpd/conf/server.csr -signkey /etc/httpd/conf/server.key -extfile android.txt -out CA.crt<br />
# openssl x509 -inform PEM -outform DER -in CA.crt -out CA.der.crt <br />
<br />
The file {{ic|android.txt}} should contain the following:<br />
<br />
basicConstraints=CA:true<br />
<br />
Then import {{ic|CA.der.crt}} to your Android device:<br />
<br />
Put the {{ic|CA.der.crt}} file onto the sdcard of your Android device (usually to the internal one, e.g. save from a mail attachment).<br />
It should be in the root directory. Go to ''Settings > Security > Credential storage'' and select ''Install from device storage''.<br />
The {{ic|.crt}} file will be detected and you will be prompted to enter a certificate name. After importing the certificate,<br />
you will find it in ''Settings > Security > Credential storage > Trusted credentials > User''.<br />
<br />
Thanks to: [https://web.archive.org/web/20150323082541/http://www.leftbrainthings.com/2013/10/13/creating-and-importing-self-signed-certificate-to-android-device/]<br />
<br />
Another way is to import the certificate directly from your server via [https://f-droid.org/en/packages/at.bitfire.cadroid/ CAdroid] and follow the instructions there.<br />
<br />
=== CSync failed to find a specific file ===<br />
<br />
This is most likely a certificate issue. Recreate it, and do not leave the common name empty or you will see the error again.<br />
<br />
# openssl req -new -x509 -nodes -newkey rsa:4096 -keyout server.key -out server.crt<br />
<br />
=== Seeing white page after login ===<br />
<br />
The cause is probably a new app that you installed. To fix this, you can use the {{ic|occ}} command as described<br />
[https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html here]. With<br />
<br />
# sudo -u http php /usr/share/webapps/nextcloud/occ app:list<br />
<br />
you can list all apps (if you installed Nextcloud in the standard directory), and with<br />
<br />
# sudo -u http php /usr/share/webapps/nextcloud/occ app:disable <nameOfExtension><br />
<br />
you can disable the troublesome app.<br />
<br />
Alternatively, you can either use [[phpMyAdmin]] to edit the {{ic|oc_appconfig}} table (if you are lucky and the table has an edit option), or do it by hand with MySQL:<br />
<br />
# mysql -u root -p owncloud<br />
MariaDB [owncloud]> '''delete from''' oc_appconfig '''where''' appid='<nameOfExtension>' '''and''' configkey='enabled' '''and''' configvalue='yes';<br />
MariaDB [owncloud]> '''insert into''' oc_appconfig (appid,configkey,configvalue) '''values''' ('<nameOfExtension>','enabled','no');<br />
<br />
This should delete the relevant configuration from the table and add it again.<br />
<br />
=== GUI sync client fails to connect ===<br />
<br />
If using HTTP basic authentication, make sure to exclude "status.php", which must be publicly accessible. [https://github.com/owncloud/mirall/issues/734]<br />
<br />
=== GUI tray icon disappears, but client still running in the background ===<br />
<br />
After waking up from a suspended state, the Nextcloud client tray icon may disappear from the system tray. A workaround is to delay the startup of the client, as noted [https://github.com/nextcloud/desktop/issues/203#issuecomment-463957811 here]. This can be done with the .desktop file, for example:<br />
<br />
{{hc|.local/share/applications/nextcloud.desktop|<nowiki><br />
...<br />
Exec=bash -c 'sleep 5 && nextcloud'<br />
...<br />
</nowiki>}}<br />
<br />
=== Some files upload, but give an error 'Integrity constraint violation...' ===<br />
<br />
You may see the following error in the ownCloud sync client:<br />
<br />
SQLSTATE[23000]: Integrity constraint violation: ... Duplicate entry '...' for key 'fs_storage_path_hash')...<br />
<br />
This is caused by an issue with the File Locking app, which is often not sufficient to prevent conflicts on some web server configurations.<br />
A more complete [https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/files_locking_transactional.html Transactional File Locking]<br />
is available that gets rid of these errors, but it requires the Redis PHP caching method. Install {{Pkg|redis}} and {{Pkg|php-redis}}, comment out<br />
your current PHP cache mechanism, and then in {{ic|/etc/php/conf.d/redis.ini}} uncomment {{ic|1=extension=redis}}.<br />
Then in {{ic|config.php}} make the following changes:<br />
<br />
'memcache.local' => '\OC\Memcache\Redis',<br />
'filelocking.enabled' => 'true',<br />
'memcache.locking' => '\OC\Memcache\Redis',<br />
'redis' => array(<br />
'host' => 'localhost',<br />
'port' => 6379,<br />
'timeout' => 0.0,<br />
),<br />
<br />
and [[start/enable]] {{ic|redis.service}}.<br />
<br />
Finally, disable the File Locking app, as Transactional File Locking will take care of it (and the two would conflict).<br />
<br />
If everything is working, you should see 'Transactional File Locking Enabled' under Server Status on the Admin page, and syncs should no longer cause issues.<br />
<br />
=== "Cannot write into apps directory" ===<br />
<br />
As mentioned in the [https://docs.nextcloud.com/server/latest/admin_manual/apps_management.html official admin manual],<br />
either you need an apps directory that is writable by the http user, or you need to set {{ic|appstoreenabled}} to {{ic|false}}. <br />
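The latter option is a single line in {{ic|config.php}}:

```php
// In /etc/webapps/nextcloud/config/config.php
// Disable the app store so Nextcloud never tries to write to the apps directory
'appstoreenabled' => false,
```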
<br />
If you have set {{ic|open_basedir}} in your PHP/web server configuration file (e.g. {{ic|/etc/httpd/conf/extra/nextcloud.conf}}), it may be necessary to add your ''/path/to/data'' directory to the string on the line starting with {{ic|php_admin_value open_basedir }}:<br />
<br />
{{hc|/etc/httpd/conf/extra/nextcloud.conf|2=<br />
<br />
php_admin_value open_basedir "''/path/to/data/'':/srv/http/:/dev/urandom:/tmp/:/usr/share/pear/:/usr/share/webapps/nextcloud/:/etc/webapps/nextcloud"<br />
}}<br />
<br />
=== Installed apps get blocked because of MIME type error ===<br />
<br />
If you put your apps folder outside of the Nextcloud installation directory, make sure your web server serves it properly.<br />
<br />
In nginx this is accomplished by adding a location block to the nginx configuration, as the folder will not be included by default.<br />
<br />
location ~ /apps2/(.*)$ {<br />
alias /var/www/nextcloud/apps/$1;<br />
}<br />
<br />
=== CSS and JS resources blocked due to MIME type error ===<br />
<br />
If you load your Nextcloud web gui and it is missing styles etc. check the browser's console logs for lines like:<br />
<br />
<nowiki>The resource from “https://example.com/core/css/guest.css?v=72c34c37-0” was blocked due to MIME type (“text/plain”) mismatch (X-Content-Type-Options: nosniff).</nowiki><br />
<br />
There are a few possible reasons. One possibility is that you have [https://docs.nextcloud.com/server/latest/admin_manual/installation/nginx.html#javascript-js-or-css-css-files-not-served-properly not included any MIME types] in your {{ic|nginx.conf}}; in that case add the following to {{ic|nginx.conf}}:<br />
<br />
types_hash_max_size 2048;<br />
types_hash_bucket_size 128;<br />
include mime.types;<br />
<br />
Here we use the {{ic|mime.types}} file provided by {{Pkg|mailcap}}; due to the large number of types included, we increase the allowed size of the types hash.<br />
<br />
Another possible reason for these errors is missing permissions on the files. Make sure the files are owned by {{ic|http:http}} and can be read and written by this user.<br />
<br />
=== Security warnings even though the recommended settings have been included in nginx.conf ===<br />
<br />
At the top of the admin page there might be a warning to set the {{ic|Strict-Transport-Security}}, {{ic|X-Content-Type-Options}},<br />
{{ic|X-Frame-Options}}, {{ic|X-XSS-Protection}} and {{ic|X-Robots-Tag}} according to https://docs.nextcloud.com/server/latest/admin_manual/installation/harden_server.html even though they are already set like that.<br />
<br />
A possible cause could be that Nextcloud itself sets those headers, uWSGI passes them along, and nginx adds them again:<br />
<br />
{{hc|$ curl -I https://domain.tld|<nowiki><br />
...<br />
X-XSS-Protection: 1; mode=block<br />
X-Content-Type-Options: nosniff<br />
X-Frame-Options: Sameorigin<br />
X-Robots-Tag: none<br />
Strict-Transport-Security: max-age=15768000; includeSubDomains; preload;<br />
X-Content-Type-Options: nosniff<br />
X-Frame-Options: SAMEORIGIN<br />
X-XSS-Protection: 1; mode=block<br />
X-Robots-Tag: none<br />
</nowiki>}}<br />
<br />
While the FastCGI sample configuration has a parameter to avoid this ({{ic|fastcgi_param modHeadersAvailable true; #Avoid sending the security headers twice}}), when using uWSGI and nginx the following modification of the uWSGI part in {{ic|nginx.conf}} could help:<br />
<br />
{{hc| /etc/nginx/nginx.conf|<nowiki><br />
...<br />
# pass all .php or .php/path urls to uWSGI<br />
location ~ ^(.+\.php)(.*)$ {<br />
include uwsgi_params;<br />
uwsgi_modifier1 14;<br />
# hide the following headers received from uwsgi, because otherwise we would send them twice since we already add them in nginx itself<br />
uwsgi_hide_header X-Frame-Options;<br />
uwsgi_hide_header X-XSS-Protection;<br />
uwsgi_hide_header X-Content-Type-Options;<br />
uwsgi_hide_header X-Robots-Tag;<br />
#If you get a connection refused error, comment out the unix socket line below and uncomment the "uwsgi_pass 127.0.0.1:3001;" line instead<br />
uwsgi_pass unix:/run/uwsgi/owncloud.sock;<br />
#uwsgi_pass 127.0.0.1:3001;<br />
}<br />
...<br />
</nowiki>}}<br />
<br />
=== "Reading from keychain failed with error: 'No keychain service available'" ===<br />
<br />
On GNOME this can be fixed by installing the packages {{Pkg|libgnome-keyring}} and {{Pkg|gnome-keyring}}.<br />
On KDE install {{Pkg|libgnome-keyring}} and {{Pkg|qtkeychain-qt5}} instead.<br />
<br />
=== FolderSync: "Method Not Allowed" ===<br />
<br />
FolderSync needs access to {{ic|/owncloud/remote.php/webdav}}, so you could create another alias for owncloud in your {{ic|/etc/httpd/conf/extra/nextcloud.conf}}:<br />
<br />
<IfModule mod_alias.c><br />
Alias /nextcloud /usr/share/webapps/nextcloud/<br />
Alias /owncloud /usr/share/webapps/nextcloud/<br />
</IfModule><br />
<br />
=== Log file spam ===<br />
<br />
{{Accuracy|This section was added 2022-03-08 without referencing an upstream bug, while it might still be relevant, we have no way to know if this issue has been fixed until a link to the issue is provided.}}<br />
<br />
The cause could be a PHP version that is too new. Until this is fixed, the log level in Nextcloud's {{ic|config.php}} can be adjusted.<br />
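For example, to log only warnings and worse (the value is taken from Nextcloud's documented levels, where 0 is debug and 4 is fatal; adjust to taste):

```php
// In /etc/webapps/nextcloud/config/config.php
'loglevel' => 2,   // 2 = warning; raise further to 3 (error) if needed
```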
<br />
== See also ==<br />
<br />
* [https://docs.nextcloud.com/ Nextcloud Documentation Overview]<br />
* [https://docs.nextcloud.com/server/latest/admin_manual/ Nextcloud Admin Manual]</div>Veoxhttps://wiki.archlinux.org/index.php?title=Bisecting_bugs_with_Git&diff=652727Bisecting bugs with Git2021-02-18T15:30:20Z<p>Veox: /* Ccache */ Fix typo.</p>
<hr />
<div>[[Category:Package management]]<br />
[[Category:Version Control System]]<br />
Often when reporting bugs encountered in projects such as Mesa or the Linux kernel, a user may be asked to bisect between the last known version that worked for them and the newer version that is causing them problems, in order to identify the troublesome commit. On Arch this can be done fairly trivially thanks to the functionality of the [[AUR]].<br />
<br />
== Reverting to an older release ==<br />
<br />
It might be useful to confirm that it is the new package release that is causing the problem. [[Downgrading packages]] on Arch can be accomplished trivially as long as an older version of the package is still stored as cache on your system, or you can use [[Arch Linux Archive]].<br />
<br />
{{Note|Even if the older version fixes the problem, it is still possible that it is not a bug within the program, but a problem with the packages as provided by Arch.}}<br />
<br />
== Building package from git ==<br />
<br />
In order to bisect, we are going to need to build a version of the package from [[git]]. This can be accomplished by building the ''-git'' package from the [[AUR]].<br />
<br />
== Setting up the bisect ==<br />
<br />
Once the package is successfully built, you need to change into the git root directory inside the {{ic|src/}} directory. The name of the git root directory is often the same as {{ic|''pkgname''}} (or the same without the {{ic|-git}} suffix):<br />
<br />
$ cd src/''git_root''<br />
<br />
From there you can start the process of bisecting:<br />
<br />
$ git bisect start<br />
<br />
The following command will show you all the tags you can use to specify where to bisect:<br />
<br />
$ git tag<br />
<br />
Following on from the earlier example, we will assume that the version ''oldver'' worked for us while ''newver'' did not:<br />
<br />
$ git bisect good ''oldver''<br />
$ git bisect bad ''newver''<br />
<br />
Now that we have our good and bad versions tagged we can proceed to test commits.<br />
<br />
== Bisecting ==<br />
<br />
Change back into the directory with the PKGBUILD. If you are still in the directory mentioned in the previous section this can be accomplished like so:<br />
<br />
$ cd ../..<br />
<br />
You can now rebuild and install the specific revision of the package:<br />
<br />
$ makepkg -efsi<br />
<br />
{{Note|It is very important to keep the {{ic|-e}} option intact, as otherwise makepkg will re-extract the sources and remove all the changes you have made.}}<br />
<br />
Once the new package is installed, you can test for your previously discovered error. Return to the directory you were in during the previous section:<br />
<br />
$ cd src/''git_root''<br />
<br />
If you encountered your problem, tell git that the revision was bad:<br />
<br />
$ git bisect bad<br />
<br />
If you did not encounter your problem, tell git that the revision was good:<br />
<br />
$ git bisect good<br />
<br />
Then repeat the steps described at the beginning of this section until ''git bisect'' names the troublesome commit.<br />
<br />
{{Note|<br />
* You may need to run a {{ic|make clean}} after issuing the ''git bisect'' command.<br />
* ''git bisect'' will count down the number of remaining steps all the way to zero, so it is important not to stop until it actually names the first bad commit.<br />
}}<br />
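<br />
Once ''git bisect'' has named the first bad commit, the repository can be returned to its original state with:<br />
<br />
 $ git bisect reset<br />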
<br />
== Speeding up builds ==<br />
<br />
=== Building smaller kernel ===<br />
<br />
You can shorten kernel build times by building only the modules required by the local system, using [[modprobed-db]] or {{ic|make localmodconfig}}. You can also completely drop irrelevant drivers, for example sound drivers when debugging a network problem.<br />
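<br />
As a sketch of typical ''modprobed-db'' usage (assuming the module database has been populated at its default location):<br />
<br />
 $ modprobed-db store<br />
 $ make LSMOD=$HOME/.config/modprobed.db localmodconfig<br />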
<br />
=== Ccache ===<br />
<br />
If you are bisecting a large project built using {{ic|gcc}}, it might be possible to reduce build times by enabling [[ccache]]. It may take several build iterations before you start to see benefits from the cache, however. The likelihood of cache hits generally increases as the distance between bisection points decreases.<br />
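<br />
With ''ccache'' installed, ''makepkg'' builds can use it by removing the {{ic|!}} from the {{ic|ccache}} entry of the build environment (a minimal sketch of the relevant line):<br />
<br />
{{hc|/etc/makepkg.conf|2=<br />
BUILDENV=(!distcc color ccache check !sign)<br />
}}<br />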
<br />
{{Note|Ccache is effective ''only when'' compiling ''exactly identical'' sources. Moreover, when bisecting the kernel it is ''not'' necessary to run {{ic|make clean}} between builds, in which case incremental builds already skip unchanged files and ccache brings no benefit.}}<br />
<br />
== Restoring package ==<br />
<br />
Reverting to the original version of the package can be done by reinstalling it from the repositories with [[pacman]].<br />
<br />
== See also ==<br />
<br />
* [http://git-scm.com/docs/git-bisect-lk2009.html Fighting regressions with git bisect]<br />
* {{man|1|git-bisect}}<br />
* [[Gentoo:Kernel git-bisect]]</div>
Veox, https://wiki.archlinux.org/index.php?title=RAID&diff=525073, RAID, 2018-06-07T10:29:10Z<p>Veox: /* GUID Partition Table */ typo: fylesystem -> filesystem</p>
<hr />
<div>[[Category:Storage virtualization]]<br />
[[es:RAID]]<br />
[[it:RAID]]<br />
[[ja:RAID]]<br />
[[ru:RAID]]<br />
[[zh-hans:RAID]]<br />
{{Related articles start}}<br />
{{Related|Software RAID and LVM}}<br />
{{Related|LVM#RAID}}<br />
{{Related|Installing with Fake RAID}}<br />
{{Related|Convert a single drive system to RAID}}<br />
{{Related|ZFS}}<br />
{{Related|ZFS/Virtual disks}}<br />
{{Related|Swap#Striping}}<br />
{{Related|Btrfs#RAID}}<br />
{{Related articles end}}<br />
{{Style|Non-standard headers, other [[Help:Style]] issues}}<br />
<br />
Redundant Array of Independent Disks ([[Wikipedia:RAID|RAID]]) is a storage technology that combines multiple disk drive components (typically disk drives or partitions thereof) into a logical unit. Depending on the RAID implementation, this logical unit can be a file system or an additional transparent layer that can hold several partitions. Data is distributed across the drives in one of several ways called [[#RAID levels]], depending on the level of redundancy and performance required. The RAID level chosen can thus prevent data loss in the event of a hard disk failure, increase performance or be a combination of both.<br />
<br />
This article explains how to create/manage a software RAID array using mdadm.<br />
<br />
{{Warning|Be sure [[Backup programs|to back up]] all data before proceeding.}}<br />
<br />
== RAID levels ==<br />
<br />
Despite the redundancy implied by most RAID levels, RAID does not guarantee that data is safe. A RAID will not protect data if there is a fire, the computer is stolen, or multiple hard drives fail at once. Furthermore, installing a system with RAID is a complex process that may destroy data.<br />
<br />
=== Standard RAID levels ===<br />
<br />
There are many different [[Wikipedia:Standard RAID levels|levels of RAID]]; the most commonly used ones are described below.<br />
<br />
; [[Wikipedia:Standard RAID levels#RAID 0|RAID 0]]<br />
: Uses striping to combine disks. Even though it ''does not provide redundancy'', it is still considered RAID. It does, however, ''provide a big speed benefit''. If the speed increase is worth the possibility of data loss (for [[swap]] partition for example), choose this RAID level. On a server, RAID 1 and RAID 5 arrays are more appropriate. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.<br />
<br />
; [[Wikipedia:Standard RAID levels#RAID 1|RAID 1]]<br />
: The most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. The example will be using RAID 1 for everything except [[swap]] and temporary data. Please note that with a software implementation, the RAID 1 level is the only option for the boot partition, because bootloaders reading the boot partition do not understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.<br />
<br />
; [[Wikipedia:Standard RAID levels#RAID 5|RAID 5]]<br />
: Requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks ''distributed across each member disk''. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.<br />
: {{Note|RAID 5 is a common choice due to its combination of speed and data redundancy. The caveat is that if one drive were to fail and another drive failed before that drive was replaced, all data will be lost. Furthermore, with modern disk sizes and expected unrecoverable read error (URE) rates on consumer disks, the rebuild of a 4TiB array is '''expected''' (i.e. higher than 50% chance) to have at least one URE. Because of this, RAID 5 is no longer advised by the storage industry.}}<br />
<br />
; [[Wikipedia:Standard RAID levels#RAID 6|RAID 6]]<br />
: Requires 4 or more physical drives, and provides the benefits of RAID 5 but with security against two drive failures. RAID 6 also uses striping, like RAID 5, but stores two distinct parity blocks ''distributed across each member disk''. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 6 can withstand the loss of two member disks. The robustness against unrecoverable read errors is somewhat better, because the array still has parity blocks when rebuilding from a single failed drive. However, given the overhead, RAID 6 is costly and in most settings RAID 10 in far2 layout (see below) provides better speed benefits and robustness, and is therefore preferred.<br />
<br />
=== Nested RAID levels ===<br />
<br />
; [[Wikipedia:Nested RAID levels#RAID 10 (RAID 1+0)|RAID 1+0]]<br />
: RAID1+0 is a nested RAID that combines two of the standard levels of RAID to gain performance and additional redundancy. It is commonly referred to as ''RAID10'', however, Linux MD RAID10 is slightly different from simple RAID layering, see below.<br />
<br />
; [[Wikipedia:Non-standard RAID levels#Linux MD RAID 10|RAID 10]]<br />
: RAID10 under Linux is built on the concepts of RAID1+0, however, it implements this as a single layer, with multiple possible layouts.<br />
: The ''near X'' layout on Y disks repeats each chunk X times on Y/2 stripes, but does not need X to divide Y evenly. The chunks are placed on almost the same location on each disk they are mirrored on, hence the name. It can work with any number of disks, starting at 2. Near 2 on 2 disks is equivalent to RAID1, near 2 on 4 disks to RAID1+0.<br />
: The ''far X'' layout on Y disks is designed to offer striped read performance on a mirrored array. It accomplishes this by dividing each disk in two sections, say front and back, and what is written to disk 1 front is mirrored in disk 2 back, and vice versa. This has the effect of being able to stripe sequential reads, which is where RAID0 and RAID5 get their performance from. The drawback is that sequential writing has a very slight performance penalty because of the distance the disk needs to seek to the other section of the disk to store the mirror. RAID10 in far 2 layout is, however, preferable to layered RAID1+0 '''and''' RAID5 whenever read speeds are of concern and availability / redundancy is crucial. However, it is still not a substitute for backups. See the wikipedia page for more information.<br />
<br />
=== RAID level comparison ===<br />
<br />
{| class="wikitable"<br />
! RAID level!!Data redundancy!!Physical drive utilization!!Read performance!!Write performance!!Min drives<br />
|-<br />
| '''0'''||{{No}}||100%||nX<br />
<br />
'''Best'''<br />
||nX<br />
<br />
'''Best'''<br />
||2<br />
|-<br />
| '''1'''||{{Yes}}||50%||Up to nX if multiple processes are reading, otherwise 1X<br />
||1X||2<br />
|-<br />
| '''5'''||{{Yes}}||67% - 94%||(n−1)X<br />
<br />
'''Superior'''<br />
||(n−1)X<br />
<br />
'''Superior'''<br />
||3<br />
|-<br />
| '''6'''||{{Yes}}||50% - 88%||(n−2)X||(n−2)X||4<br />
|-<br />
| '''10,far2'''||{{Yes}}||50%||nX<br />
'''Best;''' on par with RAID0 but redundant<br />
||(n/2)X||2<br />
|-<br />
| '''10,near2'''||{{Yes}}||50%||Up to nX if multiple processes are reading, otherwise 1X||(n/2)X||2<br />
|}<br />
<br />
<nowiki>*</nowiki> Where ''n'' stands for the number of dedicated disks.<br />
<br />
== Implementation ==<br />
<br />
The RAID devices can be managed in different ways:<br />
<br />
; Software RAID<br />
: This is the easiest implementation as it does not rely on obscure proprietary firmware and software. The array is managed by the operating system either:<br />
:* by an abstraction layer (e.g. [[#Installation|mdadm]]); {{Note|This is the method we will use later in this guide.}}<br />
:* by a logical volume manager (e.g. [[LVM#RAID|LVM]]);<br />
:* by a component of a file system (e.g. [[ZFS]], [[Btrfs#RAID|Btrfs]]).<br />
<br />
; Hardware RAID<br />
: The array is directly managed by a dedicated hardware card installed in the PC to which the disks are directly connected. The RAID logic runs on an on-board processor independently of [[Wikipedia:Central processing unit|the host processor (CPU)]]. Although this solution is independent of any operating system, the latter requires a driver in order to function properly with the hardware RAID controller. The RAID array can either be configured via an option rom interface or, depending on the manufacturer, with a dedicated application when the OS has been installed. The configuration is transparent for the Linux kernel: it does not see the disks separately.<br />
<br />
; [[Fakeraid|FakeRAID]]<br />
: This type of RAID is properly called BIOS or Onboard RAID, but is falsely advertised as hardware RAID. The array is managed by pseudo-RAID controllers where the RAID logic is implemented in an option ROM or in the firmware itself [http://www.win-raid.com/t19f13-Intel-EFI-RAID-quot-SataDriver-quot-BIOS-Modules.html with an EFI SataDriver] (in case of [[UEFI]]), but they are not full hardware RAID controllers with ''all'' RAID features implemented. Therefore, this type of RAID is sometimes called FakeRAID. {{Pkg|dmraid}} from the [[official repositories]] will be used to deal with these controllers. Here are some examples of FakeRAID controllers: [[Wikipedia:Intel Rapid Storage Technology|Intel Rapid Storage]], JMicron JMB36x RAID ROM, AMD RAID, ASMedia 106x, and NVIDIA MediaShield.<br />
<br />
=== Which type of RAID do I have? ===<br />
<br />
Since software RAID is implemented by the user, the type of RAID is easily known to the user.<br />
<br />
However, discerning between FakeRAID and true hardware RAID can be more difficult. As stated, manufacturers often incorrectly distinguish these two types of RAID, and false advertising is always possible. The best solution in this instance is to run the {{ic|lspci}} command and look through the output to find the RAID controller, then do a search to see what information can be located about that controller. Hardware RAID controllers appear in this list, but FakeRAID implementations do not. Also, true hardware RAID controllers are often rather expensive, so if someone customized the system, then choosing a hardware RAID setup will very likely have made a very noticeable change in the computer's price.<br />
<br />
== Installation ==<br />
<br />
[[Install]] {{Pkg|mdadm}}. ''mdadm'' is used for administering pure software RAID using plain block devices: the underlying hardware does not provide any RAID logic, just a supply of disks. ''mdadm'' will work with any collection of block devices, even unusual ones. For example, one can make a RAID array from a collection of thumb drives.<br />
<br />
=== Prepare the devices ===<br />
<br />
{{Warning|These steps erase everything on a device, so type carefully!}}<br />
<br />
If the device is being reused or re-purposed from an existing array, erase any old RAID configuration information:<br />
<br />
# mdadm --misc --zero-superblock /dev/<drive><br />
<br />
or if a particular partition on a drive is to be deleted:<br />
<br />
# mdadm --misc --zero-superblock /dev/<partition><br />
<br />
{{Note|<br />
* Zapping a partition's superblock should not affect the other partitions on the disk.<br />
* Due to the nature of RAID functionality it is very difficult to [[securely wipe disk]]s fully on a running array. Consider whether it is useful to do so before creating it.<br />
}}<br />
<br />
=== Partition the devices ===<br />
<br />
It is highly recommended to partition the disks to be used in the array. Since most RAID users are selecting disk drives larger than 2 TiB, GPT is required and recommended. See [[Partitioning]] for more information on partitioning and the available [[partitioning tools]].<br />
<br />
{{Note|It is also possible to create a RAID directly on the raw disks (without partitions), but it is not recommended because it can cause problems when swapping a failed disk.}}<br />
<br />
{{Tip|When replacing a failed disk of a RAID, the new disk has to be exactly the same size as the failed disk or bigger — otherwise the array recreation process will not work. Even hard drives of the same manufacturer and model can have small size differences. By leaving a little space at the end of the disk unallocated one can compensate for the size differences between drives, which makes choosing a replacement drive model easier. Therefore, it is good practice to leave about 100 MiB of unallocated space at the end of the disk.}}<br />
<br />
==== GUID Partition Table ====<br />
<br />
* After creating the partitions, their [[Wikipedia:GUID Partition Table#Partition type GUIDs|partition type GUIDs]] should be {{ic|A19D880F-05FC-4D3B-A006-743F0F84911E}} (it can be assigned by selecting partition type {{ic|Linux RAID}} in ''fdisk'' or {{ic|FD00}} in ''gdisk'').<br />
* If a larger disk array is employed, consider assigning [[Persistent block device naming#by-label|filesystem labels]] or [[Persistent block device naming#by-partlabel|partition labels]] to make it easier to identify an individual disk later.<br />
* Creating partitions that are of the same size on each of the devices is recommended.<br />
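<br />
As a hypothetical example, the Linux RAID type can be assigned to the first partition of {{ic|/dev/sdb}} non-interactively with ''sgdisk'':<br />
<br />
 # sgdisk --typecode=1:FD00 /dev/sdb<br />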
<br />
==== Master Boot Record ====<br />
<br />
For those creating partitions on HDDs with an MBR partition table, the [[Wikipedia:Partition type|partition type IDs]] available for use are:<br />
<br />
* {{ic|0xFD}} for raid autodetect arrays ({{ic|Linux raid autodetect}} in ''fdisk'')<br />
* {{ic|0xDA}} for non-fs data ({{ic|Non-FS data}} in ''fdisk'')<br />
<br />
See [https://raid.wiki.kernel.org/index.php/Partition_Types Linux Raid Wiki:Partition Types] for more information.<br />
<br />
=== Build the array ===<br />
<br />
Use {{ic|mdadm}} to build the array. See {{man|8|mdadm}} for supported options. Several examples are given below.<br />
<br />
{{Warning|Do not simply copy/paste the examples below; make sure you substitute the correct options and drive letters.}}<br />
<br />
{{Note|<br />
* If this is a RAID1 array which is intended to boot from [[Syslinux]], a limitation in syslinux v4.07 requires the metadata value to be 1.0 rather than the default of 1.2.<br />
* When creating an array from [[Archiso|Arch installation medium]] use the option {{ic|1=--homehost=''myhostname''}} (or {{ic|1=--homehost=any}} to always have the same name regardless of the host) to set the [[hostname]], otherwise the hostname {{ic|archiso}} will be written in the array metadata.<br />
}}<br />
<br />
{{Tip|You can specify a custom raid device name using the option {{ic|1=--name=''MyRAIDName''}} or by setting the raid device path to {{ic|/dev/md/''MyRAIDName''}}. Udev will create symlinks to the raid arrays in {{ic|/dev/md/}} using that name. If {{ic|homehost}} matches the current [[hostname]] (or if homehost is set to {{ic|any}}), the link will be {{ic|/dev/md/''name''}}; if the hostname does not match, the link will be {{ic|/dev/md/''homehost'':''name''}}.}}<br />
<br />
The following example shows building a 2-device RAID1 array:<br />
<br />
# mdadm --create --verbose --level=1 --metadata=1.2 --raid-devices=2 /dev/md/MyRAID1Array /dev/sdb1 /dev/sdc1<br />
<br />
The following example shows building a RAID5 array with 4 active devices and 1 spare device:<br />
<br />
# mdadm --create --verbose --level=5 --metadata=1.2 --chunk=256 --raid-devices=4 /dev/md/MyRAID5Array /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1<br />
<br />
{{Tip|{{ic|--chunk}} is used to change the chunk size from the default value. See [http://www.zdnet.com/article/chunks-the-hidden-key-to-raid-performance/ Chunks: the hidden key to RAID performance] for more on chunk size optimisation.}}<br />
<br />
The following example shows building a RAID10,far2 array with 2 devices:<br />
<br />
# mdadm --create --verbose --level=10 --metadata=1.2 --chunk=512 --raid-devices=2 --layout=f2 /dev/md/MyRAID10Array /dev/sdb1 /dev/sdc1<br />
<br />
The array is created under the virtual device {{ic|/dev/mdX}}, assembled and ready to use (in degraded mode). One can directly start using it while mdadm resyncs the array in the background. It can take a long time to restore parity. Check the progress with:<br />
<br />
$ cat /proc/mdstat<br />
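<br />
To monitor progress continuously, ''watch'' can be used:<br />
<br />
 $ watch -t 'cat /proc/mdstat'<br />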
<br />
=== Update configuration file ===<br />
<br />
By default, most of {{ic|mdadm.conf}} is commented out, and it contains just the following:<br />
<br />
{{hc|/etc/mdadm.conf|<br />
...<br />
DEVICE partitions<br />
...<br />
}}<br />
<br />
This directive tells mdadm to examine the devices referenced by {{ic|/proc/partitions}} and assemble as many arrays as possible. This is fine if you really do want to start all available arrays and are confident that no unexpected superblocks will be found (such as after installing a new storage device). A more precise approach is to explicitly add the arrays to {{ic|/etc/mdadm.conf}}:<br />
<br />
# mdadm --detail --scan >> /etc/mdadm.conf<br />
<br />
This results in something like the following:<br />
<br />
{{hc|/etc/mdadm.conf|2=<br />
...<br />
DEVICE partitions<br />
...<br />
ARRAY /dev/md/MyRAID1Array metadata=1.2 name=pine:MyRAID1Array UUID=27664f0d:111e493d:4d810213:9f291abe<br />
}}<br />
<br />
This also causes mdadm to examine the devices referenced by {{ic|/proc/partitions}}. However, only devices that have superblocks with a UUID of {{ic|27664…}} are assembled into active arrays.<br />
<br />
See {{man|5|mdadm.conf}} for more information.<br />
<br />
=== Assemble the array ===<br />
<br />
Once the configuration file has been updated the array can be assembled using mdadm:<br />
<br />
# mdadm --assemble --scan<br />
<br />
=== Format the RAID filesystem ===<br />
<br />
The array can now be formatted with a [[file system]] like any other partition, just keep in mind that:<br />
<br />
* Due to the large volume size not all filesystems are suited (see: [[Wikipedia:Comparison of file systems#Limits]]).<br />
* The filesystem should support growing and shrinking while online (see: [[Wikipedia:Comparison of file systems#Features]]).<br />
* One should calculate the correct stride and stripe-width for optimal performance.<br />
<br />
==== Calculating the stride and stripe width ====<br />
<br />
Two parameters are required to optimise the filesystem structure to fit optimally within the underlying RAID structure: the ''stride'' and ''stripe width''. These are derived from the RAID ''chunk size'', the filesystem ''block size'', and the ''number of "data disks"''.<br />
<br />
The chunk size is a property of the RAID array, decided at the time of its creation. {{ic|mdadm}}'s current default is 512 KiB. It can be found with {{ic|mdadm}}:<br />
<br />
# mdadm --detail /dev/mdX | grep 'Chunk Size'<br />
<br />
The block size is a property of the filesystem, decided at ''its'' creation. The default for many filesystems, including ext4, is 4 KiB. See {{ic|/etc/mke2fs.conf}} for details on ext4.<br />
<br />
The number of "data disks" is the minimum number of devices in the array required to completely rebuild it without data loss. For example, this is N for a raid0 array of N devices and N-1 for raid5.<br />
<br />
Once you have these three quantities, the stride and the stripe width can be calculated using the following formulas:<br />
<br />
stride = chunk size / block size<br />
stripe width = number of data disks * stride<br />
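<br />
The arithmetic can be sanity-checked with a quick shell sketch (using the hypothetical RAID5 numbers from Example 2 below):<br />
<br />
```shell
# Hypothetical values: 512 KiB chunk, 4 KiB filesystem block,
# 3 data disks (a RAID5 array of 4 drives).
chunk_kib=512
block_kib=4
data_disks=3

stride=$((chunk_kib / block_kib))       # chunk size / block size
stripe_width=$((data_disks * stride))   # number of data disks * stride

echo "stride=$stride stripe-width=$stripe_width"
# prints: stride=128 stripe-width=384
```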
<br />
===== Example 1. RAID0 =====<br />
<br />
Example formatting to ext4 with the correct stripe width and stride:<br />
<br />
* Hypothetical RAID0 array is composed of 2 physical disks.<br />
* Chunk size is 64 KiB.<br />
* Block size is 4 KiB.<br />
<br />
stride = chunk size / block size. In this example, the math is 64/4 so the stride = 16.<br />
<br />
stripe width = # of physical '''data''' disks * stride. In this example, the math is 2*16 so the stripe width = 32.<br />
<br />
# mkfs.ext4 -v -L myarray -m 0.5 -b 4096 -E stride=16,stripe-width=32 /dev/md0<br />
<br />
===== Example 2. RAID5 =====<br />
<br />
Example formatting to ext4 with the correct stripe width and stride:<br />
<br />
* Hypothetical RAID5 array is composed of 4 physical disks; 3 data disks and 1 parity disk.<br />
* Chunk size is 512 KiB.<br />
* Block size is 4 KiB.<br />
<br />
stride = chunk size / block size. In this example, the math is 512/4 so the stride = 128.<br />
<br />
stripe width = # of physical '''data''' disks * stride. In this example, the math is 3*128 so the stripe width = 384.<br />
<br />
# mkfs.ext4 -v -L myarray -m 0.01 -b 4096 -E stride=128,stripe-width=384 /dev/md0<br />
<br />
For more on stride and stripe width, see: [http://wiki.centos.org/HowTos/Disk_Optimization RAID Math].<br />
<br />
===== Example 3. RAID10,far2 =====<br />
<br />
Example formatting to ext4 with the correct stripe width and stride:<br />
<br />
* Hypothetical RAID10 array is composed of 2 physical disks. Because of the properties of RAID10 in far2 layout, both count as data disks.<br />
* Chunk size is 512 KiB.<br />
<br />
{{hc|# mdadm --detail /dev/md0 {{!}} grep 'Chunk Size'|<br />
Chunk Size : 512K<br />
}}<br />
<br />
* Block size is 4 KiB.<br />
<br />
stride = chunk size / block size.<br />
In this example, the math is 512/4 so the stride = 128.<br />
<br />
stripe width = # of physical '''data''' disks * stride.<br />
In this example, the math is 2*128 so the stripe width = 256.<br />
<br />
# mkfs.ext4 -v -L myarray -m 0.01 -b 4096 -E stride=128,stripe-width=256 /dev/md0<br />
<br />
== Mounting from a Live CD ==<br />
<br />
To mount the RAID partition from a Live CD, use:<br />
<br />
# mdadm --assemble /dev/<disk1> /dev/<disk2> /dev/<disk3> /dev/<disk4><br />
<br />
If your RAID 1 array that is missing a disk was wrongly auto-detected as RAID 1 (as per {{ic|mdadm --detail /dev/md<number>}}) and reported as inactive (as per {{ic|cat /proc/mdstat}}), stop the array first:<br />
<br />
# mdadm --stop /dev/md<number><br />
<br />
== Installing Arch Linux on RAID ==<br />
<br />
{{Note|The following section is applicable only if the root filesystem resides on the array. Users may skip this section if the array only holds data partitions.}}<br />
<br />
You should create the RAID array between the [[Partitioning]] and [[File systems#Create a file system|formatting]] steps of the Installation Procedure. Instead of directly formatting a partition to be your root file system, the file system will be created on a RAID array.<br />
Follow the section [[#Installation]] to create the RAID array. Then continue with the installation procedure until the pacstrap step is completed.<br />
When using [[Unified Extensible Firmware Interface|UEFI boot]], also read [[EFI System Partition#ESP on RAID]].<br />
<br />
=== Update configuration file ===<br />
<br />
{{Note|This should be done outside of the chroot, hence the prefix {{ic|/mnt}} to the filepath.}}<br />
<br />
After the base system is installed the default configuration file, {{ic|mdadm.conf}}, must be updated like so:<br />
<br />
# mdadm --detail --scan >> /mnt/etc/mdadm.conf<br />
<br />
Always check the {{ic|mdadm.conf}} configuration file using a text editor after running this command to ensure that its contents look reasonable.<br />
<br />
{{Note|To prevent failure of {{ic|mdmonitor.service}} at boot (enabled by default), you will need to uncomment {{ic|MAILADDR}} and provide an e-mail address and/or application to handle notification of problems with your array at the bottom of {{ic|mdadm.conf}}. See [[#Mailing on events]].}}<br />
<br />
Continue with the installation procedure until you reach the step [[Installation guide#Initramfs]], then follow the next section.<br />
<br />
=== Configure mkinitcpio ===<br />
<br />
{{Note|This should be done whilst chrooted.}}<br />
<br />
Add {{ic|mdadm_udev}} to the [[mkinitcpio#HOOKS|HOOKS]] section of the {{ic|mkinitcpio.conf}} to add support for mdadm into the initramfs image:<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
...<br />
HOOKS=(base udev autodetect keyboard modconf block '''mdadm_udev''' filesystems fsck)<br />
...<br />
}}<br />
<br />
If you use the {{ic|mdadm_udev}} hook with a FakeRAID array, it is recommended to include ''mdmon'' in the [[mkinitcpio#BINARIES and FILES|BINARIES]] array:<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
...<br />
BINARIES=('''mdmon''')<br />
...<br />
}}<br />
<br />
Then [[Regenerate the initramfs]].<br />
<br />
See also [[mkinitcpio#Using RAID]].<br />
<br />
=== Configure the boot loader ===<br />
<br />
Point the {{ic|root}} parameter to the mapped device. E.g.:<br />
<br />
root=/dev/md/''MyRAIDArray''<br />
<br />
If booting from a software raid partition fails using the kernel device node method above, an alternative way is to use one of the methods from [[Persistent block device naming]], for example:<br />
<br />
root=LABEL=Root_Label<br />
<br />
See also [[GRUB#RAID]].<br />
<br />
== RAID maintenance ==<br />
<br />
=== Scrubbing ===<br />
<br />
It is good practice to regularly run data [[wikipedia:Data_scrubbing|scrubbing]] to check for and fix errors. Depending on the size/configuration of the array, a scrub may take multiple hours to complete.<br />
<br />
To initiate a data scrub:<br />
<br />
# echo check > /sys/block/md0/md/sync_action<br />
<br />
The check operation scans the drives for bad sectors and automatically repairs them. If it finds good sectors that contain bad data (that is, the data in a sector does not agree with what the data from another disk indicates it should be; for example, the parity block plus the other data blocks would cause us to think that this data block is incorrect), then no action is taken, but the event is logged (see below). This "do nothing" approach allows admins to inspect the data in the sector and the data that would be produced by rebuilding the sector from redundant information, and to pick the correct data to keep.<br />
<br />
As with many tasks/items relating to mdadm, the status of the scrub can be queried by reading {{ic|/proc/mdstat}}.<br />
<br />
Example:<br />
<br />
{{hc|$ cat /proc/mdstat|<nowiki><br />
Personalities : [raid6] [raid5] [raid4] [raid1]<br />
md0 : active raid1 sdb1[0] sdc1[1]<br />
3906778112 blocks super 1.2 [2/2] [UU]<br />
[>....................] check = 4.0% (158288320/3906778112) finish=386.5min speed=161604K/sec<br />
bitmap: 0/30 pages [0KB], 65536KB chunk<br />
</nowiki>}}<br />
<br />
To stop a currently running data scrub safely:<br />
<br />
# echo idle > /sys/block/md0/md/sync_action<br />
<br />
{{Note|If the system is rebooted after a partial scrub has been suspended, the scrub will start over.}}<br />
<br />
When the scrub is complete, admins may check how many blocks (if any) have been flagged as bad:<br />
<br />
# cat /sys/block/md0/md/mismatch_cnt<br />
<br />
==== General notes on scrubbing ====<br />
<br />
{{Note|Users may alternatively echo '''repair''' to {{ic|/sys/block/md0/md/sync_action}} but this is ill-advised since if a mismatch in the data is encountered, it would be automatically updated to be consistent. The danger is that we really do not know whether it is the parity or the data block that is correct (or which data block in case of RAID1). It is luck-of-the-draw whether or not the operation gets the right data instead of the bad data.}}<br />
<br />
It is a good idea to set up a cron job as root to schedule a periodic scrub. See {{AUR|raid-check}}, which can assist with this. To perform a periodic scrub using systemd timers instead of cron, see {{AUR|raid-check-systemd}}, which contains the same script along with associated systemd timer unit files.<br />
<br />
{{Note|For typical platter drives, scrubbing can take approximately '''six seconds per gigabyte''' (that is one hour forty-five minutes per terabyte) so plan the start of your cron job or timer appropriately.}}<br />
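<br />
The figure in the note above can be turned into a quick estimate. The following sketch (a hypothetical helper, not part of mdadm) converts an array size in GiB into an approximate scrub duration at the quoted six seconds per gigabyte:<br />
<br />
```shell
#!/bin/sh
# Rough scrub-time estimate at ~6 seconds per GiB (typical platter drives).
# The real duration depends on drive speed and concurrent I/O.
scrub_eta() {
    gib=$1
    secs=$((gib * 6))
    printf '%dh %dm\n' $((secs / 3600)) $(( (secs % 3600) / 60 ))
}

scrub_eta 1024    # a 1 TiB array: prints "1h 42m"
```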
<br />
==== RAID1 and RAID10 notes on scrubbing ====<br />
<br />
Because RAID1 and RAID10 writes in the kernel are unbuffered, an array can have non-zero mismatch counts even when it is healthy. These non-zero counts exist only in transient data areas, where they do not pose a problem. However, there is no way to tell the difference between a non-zero count in transient data and one that signifies a real problem, which makes scrubbing a source of false positives for RAID1 and RAID10 arrays. It is still recommended to scrub regularly in order to catch and correct any bad sectors that might be present on the devices.<br />
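<br />
As a sketch of how the count might be interpreted in a monitoring script (the threshold handling here is an illustration, not an mdadm feature), the value read from {{ic|mismatch_cnt}} can be checked like this:<br />
<br />
```shell
#!/bin/sh
# Interpret a mismatch count read from /sys/block/mdX/md/mismatch_cnt.
# On RAID1/RAID10, small non-zero counts may be false positives (see above),
# so report rather than auto-repair.
report_mismatch() {
    cnt=$1
    if [ "$cnt" -eq 0 ]; then
        echo "OK: no mismatched sectors"
    else
        echo "WARNING: $cnt mismatched sectors - inspect before repairing"
    fi
}

# Fall back to 0 if the array does not exist on this machine.
report_mismatch "$(cat /sys/block/md0/md/mismatch_cnt 2>/dev/null || echo 0)"
```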
<br />
=== Removing devices from an array ===<br />
<br />
One can remove a device from the array after marking it as faulty:<br />
<br />
# mdadm --fail /dev/md0 /dev/sdxx<br />
<br />
Now remove it from the array:<br />
<br />
# mdadm --remove /dev/md0 /dev/sdxx<br />
<br />
To remove a device permanently (for example, to use it individually from now on), issue the two commands described above, then:<br />
<br />
# mdadm --zero-superblock /dev/sdxx<br />
<br />
{{Warning|<br />
* Do not issue this command on linear or RAID0 arrays or data loss will occur!<br />
* Reusing the removed disk without zeroing the superblock will cause loss of all data on the next boot, as mdadm will try to use it as part of the RAID array.<br />
}}<br />
<br />
Stop using an array:<br />
<br />
# Unmount the target array.<br />
# Stop the array with: {{ic|mdadm --stop /dev/md0}}<br />
# Repeat the three commands described at the beginning of this section on each device.<br />
# Remove the corresponding line from {{ic|/etc/mdadm.conf}}.<br />
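<br />
The steps above can be sketched as a small dry-run script (a hypothetical helper, not part of mdadm) that only ''prints'' the commands, so the plan can be reviewed before anything destructive happens:<br />
<br />
```shell
#!/bin/sh
# Dry run: print, rather than execute, the commands to stop using an array.
# Review the output carefully before running anything for real.
teardown_plan() {
    md=$1; shift
    echo "umount $md"
    echo "mdadm --stop $md"
    # After the array is stopped, wipe each member's superblock so it is
    # no longer recognised as part of a RAID array.
    for dev in "$@"; do
        echo "mdadm --zero-superblock $dev"
    done
    echo "# finally: remove the $md line from /etc/mdadm.conf"
}

teardown_plan /dev/md0 /dev/sda1 /dev/sdb1
```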
<br />
=== Adding a new device to an array ===<br />
<br />
Adding new devices with mdadm can be done on a running system with the devices mounted.<br />
Partition the new device using the same layout as one of those already in the arrays as discussed above.<br />
<br />
Assemble the RAID array if it is not already assembled:<br />
<br />
# mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1<br />
<br />
Add the new device to the array:<br />
<br />
# mdadm --add /dev/md0 /dev/sdc1<br />
<br />
This should not take long for mdadm to do. Again, check the progress with:<br />
<br />
# cat /proc/mdstat<br />
<br />
Check that the device has been added with the command:<br />
<br />
# mdadm --misc --detail /dev/md0<br />
<br />
{{Note|For RAID0 arrays you may get the following error message:<br />
<br />
mdadm: add new device failed for /dev/sdc1 as 2: Invalid argument<br />
<br />
This is because the above commands will add the new disk as a "spare" but RAID0 does not have spares. If you want to add a device to a RAID0 array, you need to "grow" and "add" in the same command. This is demonstrated below:<br />
<br />
# mdadm --grow /dev/md0 --raid-devices<nowiki>=</nowiki>3 --add /dev/sdc1<br />
<br />
}}<br />
<br />
=== Increasing size of a RAID volume ===<br />
<br />
If larger disks are installed in a RAID array or partition sizes have been increased, it may be desirable to increase the size of the RAID volume to fill the larger available space. This process begins by following the above sections on replacing disks. Once the RAID volume has been rebuilt onto the larger disks, it must be "grown" to fill the space.<br />
<br />
# mdadm --grow /dev/md0 --size=max<br />
<br />
Next, partitions present on the RAID volume {{ic|/dev/md0}} may need to be resized. See [[Partitioning]] for details. Finally, the filesystem on the above mentioned partition will need to be resized. If partitioning was performed with {{ic|gparted}} this will be done automatically. If other tools were used, unmount and then resize the filesystem manually.<br />
<br />
# umount /storage<br />
# fsck.ext4 -f /dev/md0p1<br />
# resize2fs /dev/md0p1<br />
<br />
=== Change sync speed limits ===<br />
<br />
Syncing can take a while. If the machine is not needed for other tasks the speed limit can be increased.<br />
<br />
{{hc|# cat /proc/mdstat|<nowiki><br />
Personalities : [raid1]<br />
md0 : active raid1 sda3[2] sdb3[1]<br />
155042219 blocks super 1.2 [2/1] [_U]<br />
[>....................] recovery = 0.0% (77696/155042219) finish=265.8min speed=9712K/sec<br />
<br />
unused devices: <none><br />
</nowiki>}}<br />
<br />
Check the current speed limit.<br />
<br />
{{hc|# cat /proc/sys/dev/raid/speed_limit_min|<br />
1000<br />
}}<br />
<br />
{{hc|# cat /proc/sys/dev/raid/speed_limit_max|<br />
200000<br />
}}<br />
<br />
Increase the limits.<br />
<br />
# echo 400000 >/proc/sys/dev/raid/speed_limit_min<br />
# echo 400000 >/proc/sys/dev/raid/speed_limit_max<br />
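<br />
These echo commands do not survive a reboot. To make the limits persistent, the corresponding [[sysctl]] keys (the {{ic|dev.raid}} namespace provided by the md driver) can be set in a drop-in file; the file name here is arbitrary:<br />
<br />
{{hc|/etc/sysctl.d/99-raid-speed.conf|<nowiki><br />
dev.raid.speed_limit_min = 400000<br />
dev.raid.speed_limit_max = 400000<br />
</nowiki>}}<br />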
<br />
Then check out the syncing speed and estimated finish time.<br />
<br />
{{hc|# cat /proc/mdstat|<nowiki><br />
Personalities : [raid1]<br />
md0 : active raid1 sda3[2] sdb3[1]<br />
155042219 blocks super 1.2 [2/1] [_U]<br />
[>....................] recovery = 1.3% (2136640/155042219) finish=158.2min speed=16102K/sec<br />
<br />
unused devices: <none><br />
</nowiki>}}<br />
<br />
See also [[sysctl#MDADM]].<br />
<br />
== Monitoring ==<br />
<br />
A simple one-liner that prints out the status of the RAID devices:<br />
<br />
{{hc|# awk '/^md/ {printf "%s: ", $1}; /blocks/ {print $NF}' </proc/mdstat<br />
|md1: [UU]<br />
md0: [UU]<br />
}}<br />
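<br />
The same ''awk'' program can be tried against canned input, which also illustrates what it matches: lines starting with {{ic|md}} supply the device name, and lines containing {{ic|blocks}} supply the final status field:<br />
<br />
```shell
#!/bin/sh
# Feed a sample of /proc/mdstat to the one-liner shown above.
printf '%s\n' \
  'Personalities : [raid1]' \
  'md0 : active raid1 sdb1[0] sdc1[1]' \
  '      3906778112 blocks super 1.2 [2/2] [UU]' |
awk '/^md/ {printf "%s: ", $1}; /blocks/ {print $NF}'
# prints "md0: [UU]"
```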
<br />
=== Watch mdstat ===<br />
<br />
# watch -t 'cat /proc/mdstat'<br />
<br />
Or preferably using {{pkg|tmux}}<br />
<br />
# tmux split-window -l 12 "watch -t 'cat /proc/mdstat'"<br />
<br />
=== Track IO with iotop ===<br />
<br />
The {{pkg|iotop}} package displays the input/output stats for processes. Use this command to view the IO for raid threads.<br />
<br />
# iotop -a -p $(sed 's, , -p ,g' <<<`pgrep "_raid|_resync|jbd2"`)<br />
<br />
=== Track IO with iostat ===<br />
<br />
The ''iostat'' utility from {{Pkg|sysstat}} package displays the input/output statistics for devices and partitions.<br />
<br />
# iostat -dmy 1 /dev/md0<br />
# iostat -dmy 1 # all<br />
<br />
=== Mailing on events ===<br />
<br />
An SMTP mail server (sendmail) or at least an email forwarder (ssmtp/msmtp) is required to accomplish this. Perhaps the simplest solution is to use {{AUR|dma}}, which is very small (installs to 0.08 MiB) and requires no setup.<br />
<br />
Edit {{ic|/etc/mdadm.conf}} to define the email address to which notifications will be sent.<br />
<br />
{{Note|If using dma as mentioned above, users may simply mail directly to the username on the localhost rather than to an external email address.}}<br />
<br />
To test the configuration:<br />
<br />
# mdadm --monitor --scan --oneshot --test<br />
<br />
mdadm includes {{ic|mdmonitor.service}} to perform the monitoring task, so at this point, you have nothing left to do. If you do not set a mail address in {{ic|/etc/mdadm.conf}}, that service will fail. If you do not want to receive mail on mdadm events, the failure can be ignored; if you do not want notifications and are sensitive about failure messages, you can [[mask]] the unit.<br />
<br />
==== Alternative method ====<br />
<br />
To avoid installing an SMTP mail server or an email forwarder, you can use the [[S-nail]] tool (do not forget to set it up) already on your system.<br />
<br />
Create a file named {{ic|/etc/mdadm_warning.sh}}:<br />
<br />
{{hc|/etc/mdadm_warning.sh|2=<br />
#!/bin/bash<br />
event=$1<br />
device=$2<br />
<br />
echo " " | /usr/bin/mailx -s "$event on $device" '''destination@email.com'''<br />
}}<br />
<br />
And give it execution permissions {{ic|chmod +x /etc/mdadm_warning.sh}}<br />
<br />
Then add this to {{ic|/etc/mdadm.conf}}:<br />
<br />
PROGRAM /etc/mdadm_warning.sh<br />
<br />
To test and enable use the same as in the previous method.<br />
<br />
== Troubleshooting ==<br />
<br />
If you get an error on reboot about "invalid raid superblock magic" and you have additional hard drives other than the ones you installed to, check that your hard drive order is correct. During installation, your RAID devices may be hdd, hde and hdf, but during boot they may be hda, hdb and hdc. Adjust your kernel line accordingly.<br />
<br />
=== Error: "kernel: ataX.00: revalidation failed" ===<br />
<br />
If you suddenly (e.g. after a reboot or changed BIOS settings) experience error messages like:<br />
<br />
Feb 9 08:15:46 hostserver kernel: ata8.00: revalidation failed (errno=-5)<br />
<br />
It does not necessarily mean that a drive is broken. Web searches often turn up panicked threads that assume the worst; do not panic. You may simply have changed APIC or ACPI settings in your BIOS or kernel parameters. Change them back and you should be fine. Ordinarily, turning APIC and/or ACPI off should help.<br />
<br />
=== Start arrays read-only ===<br />
<br />
When an md array is started, the superblock will be written, and resync may begin. To start read-only set the kernel module {{ic|md_mod}} parameter {{ic|start_ro}}. When this is set, new arrays get an 'auto-ro' mode, which disables all internal io (superblock updates, resync, recovery) and is automatically switched to 'rw' when the first write request arrives.<br />
<br />
{{Note|The array can be set to true 'ro' mode using {{ic|mdadm --readonly}} before the first write request, or resync can be started without a write using {{ic|mdadm --readwrite}}.}}<br />
<br />
To set the parameter at boot, add {{ic|1=md_mod.start_ro=1}} to your kernel line.<br />
<br />
Or set it at module load time from a file in {{ic|/etc/modprobe.d/}} or directly from {{ic|/sys/}}:<br />
<br />
# echo 1 > /sys/module/md_mod/parameters/start_ro<br />
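<br />
For example, a drop-in file in {{ic|/etc/modprobe.d/}} (the file name is arbitrary) could contain:<br />
<br />
{{hc|/etc/modprobe.d/start_ro.conf|<nowiki><br />
options md_mod start_ro=1<br />
</nowiki>}}<br />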
<br />
=== Recovering from a broken or missing drive in the raid ===<br />
<br />
You might also get the above-mentioned error when one of the drives breaks for whatever reason. In that case you will have to force the RAID to start even with one disk missing. Type this (change where needed):<br />
<br />
# mdadm --manage /dev/md0 --run<br />
<br />
Now you should be able to mount it again with something like this (if you had it in fstab):<br />
<br />
# mount /dev/md0<br />
<br />
Now the RAID should be working again and available to use, although with one disk missing! So, partition the replacement disk the way described above in [[#Prepare the devices]]. Once that is done you can add the new disk to the RAID by doing:<br />
<br />
 # mdadm --manage /dev/md0 --add /dev/sdd1<br />
<br />
If you type:<br />
<br />
# cat /proc/mdstat<br />
<br />
you will probably see that the RAID is now active and rebuilding.<br />
<br />
You also might want to update your configuration (see: [[#Update configuration file]]).<br />
<br />
== Benchmarking ==<br />
<br />
There are several tools for benchmarking a RAID. The most notable improvement is the speed increase when multiple threads are reading from the same RAID volume.<br />
<br />
{{AUR|tiobench}} specifically benchmarks these performance improvements by measuring fully-threaded I/O on the disk.<br />
<br />
{{Pkg|bonnie++}} tests database type access to one or more files, and creation, reading, and deleting of small files which can simulate the usage of programs such as Squid, INN, or Maildir format e-mail. The enclosed [http://www.coker.com.au/bonnie++/zcav/ ZCAV] program tests the performance of different zones of a hard drive without writing any data to the disk.<br />
<br />
{{ic|hdparm}} should '''NOT''' be used to benchmark a RAID, because it provides very inconsistent results.<br />
<br />
== See also ==<br />
<br />
{{Out of date|A lot of old and dead links.}}<br />
<br />
* [http://www.gentoo.org/doc/en/articles/software-raid-p1.xml Software RAID in the new Linux 2.4 kernel, Part 1]{{Dead link|2018|03|10}} and [http://www.gentoo.org/doc/en/articles/software-raid-p2.xml Part 2]{{Dead link|2018|03|10}} in the Gentoo Linux Docs<br />
* [http://raid.wiki.kernel.org/index.php/Linux_Raid Linux RAID wiki entry] on The Linux Kernel Archives<br />
* [https://raid.wiki.kernel.org/index.php/Write-intent_bitmap How Bitmaps Work]<br />
* [http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-raid.html Chapter 15: Redundant Array of Independent Disks (RAID)] of Red Hat Enterprise Linux 6 Documentation<br />
* [http://tldp.org/FAQ/Linux-RAID-FAQ/x37.html Linux-RAID FAQ] on the Linux Documentation Project<br />
* [http://support.dell.com/support/topics/global.aspx/support/entvideos/raid?c=us&l=en&s=gen Dell.com Raid Tutorial]{{Dead link|2018|03|10}} - Interactive Walkthrough of Raid<br />
* [http://www.miracleas.com/BAARF/ BAARF]{{Dead link|2018|03|10}} including ''[http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt Why should I not use RAID 5?]''{{Dead link|2018|03|10}} by Art S. Kagel<br />
* [http://www.linux-mag.com/id/7924/ Introduction to RAID], [http://www.linux-mag.com/id/7931/ Nested-RAID: RAID-5 and RAID-6 Based Configurations], [http://www.linux-mag.com/id/7928/ Intro to Nested-RAID: RAID-01 and RAID-10], and [http://www.linux-mag.com/id/7932/ Nested-RAID: The Triple Lindy] in Linux Magazine<br />
* [http://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html HowTo: Speed Up Linux Software Raid Building And Re-syncing]<br />
* [http://fomori.org/blog/?p=94 RAID5-Server to hold all your data]<br />
* [[Wikipedia:Non-RAID drive architectures]]<br />
<br />
'''mdadm'''<br />
* [http://anonscm.debian.org/gitweb/?p=pkg-mdadm/mdadm.git;a=blob_plain;f=debian/FAQ;hb=HEAD Debian mdadm FAQ]<br />
* [http://www.kernel.org/pub/linux/utils/raid/mdadm/ mdadm source code]<br />
* [http://www.linux-mag.com/id/7939/ Software RAID on Linux with mdadm] in Linux Magazine<br />
* [[Wikipedia:mdadm|Wikipedia - mdadm]]<br />
<br />
'''Forum threads'''<br />
* [http://forums.overclockers.com.au/showthread.php?t=865333 Raid Performance Improvements with bitmaps]<br />
* [https://bbs.archlinux.org/viewtopic.php?id=125445 GRUB and GRUB2]<br />
* [https://bbs.archlinux.org/viewtopic.php?id=123698 Can't install grub2 on software RAID]<br />
* [http://forums.gentoo.org/viewtopic-t-888624-start-0.html Use RAID metadata 1.2 in boot and root partition]<br />
<br />
'''RAID with encryption'''<br />
* [http://www.shimari.com/dm-crypt-on-raid/ Linux/Fedora: Encrypt /home and swap over RAID with dm-crypt] by Justin Wells</div>Veoxhttps://wiki.archlinux.org/index.php?title=Talk:OfflineIMAP&diff=494522Talk:OfflineIMAP2017-10-30T20:56:06Z<p>Veox: /* Updated section on starting as a systemd timer */ Remove section: references what's not there anymore.</p>
<hr />
<div>http://nicolas33.github.com/offlineimap/#signals says <br />
<br />
You can send SIGINT to OfflineIMAP using (~/.offlineimap/pid) to kill it. SIGUSR1 will force an immediate resync of all accounts. This will be ignored for all accounts for which a resync is already in progress.<br />
<br />
Can someone please explain how to send a signal to a file?<br />
<br />
: In that file is stored PID (process ID) of the current running offlineimap process. You can use this number as an argument to kill command, to send signals. Like this:<br />
: {{ic|kill -s SIGUSR1 `cat ~/.offlineimap/pid`}}<br />
: You can see more details [https://github.com/nicolas33/offlineimap/commit/e1fb9492f84538df698d6a2f1cfa2738929ed040 here], [[User:Nixie|Nixie]] 13:50, 7 February 2011 (EST)<br />
<br />
== Warnings ==<br />
Before (re)adding more warnings about offlineimap's alleged lack of stability and the potential to cataclysmically delete all your mail, please provide some examples of where this has actually happened.<br />
I have been using it exclusively for the last 3 years on at least 3 machines and have experienced no issues. I can't find a single forum thread describing any such massive failures, and the repeated attempts<br />
to include warnings on this page are starting to look like scaremongering.<br />
<br />
[[User:Jasonwryan|Jasonwryan]] ([[User talk:Jasonwryan|talk]]) 03:14, 6 March 2013 (UTC)</div>Veoxhttps://wiki.archlinux.org/index.php?title=OfflineIMAP&diff=494496OfflineIMAP2017-10-30T11:50:52Z<p>Veox: /* systemd service */ Re-introduce "accuracy" box.</p>
<hr />
<div>[[Category:Email clients]]<br />
[[ja:OfflineIMAP]]<br />
{{Related articles start}}<br />
{{Related|isync}}<br />
{{Related|notmuch}}<br />
{{Related|msmtp}}<br />
{{Related articles end}}<br />
<br />
[http://offlineimap.org/ OfflineIMAP] is a Python utility to sync mail from IMAP servers. It does not work with the POP3 protocol or mbox, and is usually paired with a MUA such as [[Mutt]].<br />
<br />
{{Note|[http://www.offlineimap.org/development/2015/10/08/imapfw-is-made-public.html imapfw] is intended to replace offlineimap in the future.}}<br />
<br />
== Installation ==<br />
<br />
Install {{pkg|offlineimap}}. For a development version, install {{AUR|offlineimap-git}}.<br />
<br />
== Configuration ==<br />
<br />
Offlineimap is distributed with two default configuration files, which are both located in {{ic|/usr/share/offlineimap/}}. {{ic|offlineimap.conf}} contains every setting and is thoroughly documented. Alternatively, {{ic|offlineimap.conf.minimal}} is not commented and only contains a small number of settings (see: [[#Minimal|Minimal]]).<br />
<br />
Copy one of the default configuration files to {{ic|~/.offlineimaprc}}.<br />
<br />
{{note|Writing a comment after an option/value on the same line is invalid syntax, hence take care that comments are placed on their own separate line.}}<br />
<br />
=== Minimal ===<br />
<br />
The following file is a commented version of {{ic|offlineimap.conf.minimal}}.<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
[general]<br />
# List of accounts to be synced, separated by a comma.<br />
accounts = main<br />
<br />
[Account main]<br />
# Identifier for the local repository; e.g. the maildir to be synced via IMAP.<br />
localrepository = main-local<br />
# Identifier for the remote repository; i.e. the actual IMAP, usually non-local.<br />
remoterepository = main-remote<br />
<br />
[Repository main-local]<br />
# OfflineIMAP supports Maildir, GmailMaildir, and IMAP for local repositories.<br />
type = Maildir<br />
# Where should the mail be placed?<br />
localfolders = ~/mail<br />
<br />
[Repository main-remote]<br />
# Remote repos can be IMAP or Gmail, the latter being a preconfigured IMAP.<br />
type = IMAP<br />
remotehost = host.domain.tld<br />
remoteuser = username<br />
</nowiki>}}<br />
<br />
=== Selective folder synchronization ===<br />
<br />
For synchronizing only certain folders, you can use a [http://offlineimap.org/doc/nametrans.html#folderfilter folderfilter] in the '''remote''' section of the account in {{ic|~/.offlineimaprc}}. For example, the following configuration will only synchronize the folders {{ic|Inbox}} and {{ic|Sent}}:<br />
<br />
{{hc|~/.offlineimaprc|2=<br />
[Repository main-remote]<br />
# Synchronize only the folders Inbox and Sent:<br />
folderfilter = lambda foldername: foldername in ["Inbox", "Sent"]<br />
...<br />
}}<br />
<br />
For more options, see the [http://offlineimap.org/doc/nametrans.html#folderfilter official documentation].<br />
<br />
== Usage ==<br />
<br />
Before running offlineimap, create any parent directories that were allocated to local repositories:<br />
$ mkdir ~/mail<br />
<br />
Now, run the program:<br />
$ offlineimap<br />
<br />
Mail accounts will now be synced. If anything goes wrong, take a closer look at the error messages; OfflineIMAP is usually very verbose about problems, partly because the developers left full tracebacks in the final product.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Running offlineimap in the background ===<br />
<br />
Most other mail transfer agents assume that the user will run the tool as a [[daemon]], syncing periodically by default. In offlineimap, a few settings control backgrounded tasks.<br />
<br />
Confusingly, they are spread all over the configuration file:<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
# In the general section<br />
[general]<br />
# Controls how many accounts may be synced simultaneously<br />
maxsyncaccounts = 1<br />
<br />
# In the account identifier<br />
[Account main]<br />
# Minutes between syncs<br />
autorefresh = 0.5<br />
# Quick-syncs do not update if the only changes were to IMAP flags.<br />
# autorefresh=0.5 together with quick=10 yields<br />
# 10 quick refreshes between each full refresh, with 0.5 minutes between every <br />
# refresh, regardless of type.<br />
quick = 10<br />
<br />
# In the remote repository identifier<br />
[Repository main-remote]<br />
# Instead of closing the connection once a sync is complete, offlineimap will<br />
# send empty data to the server to hold the connection open. A value of 60<br />
# attempts to hold the connection for a minute between syncs (both quick and<br />
# autorefresh). This setting has no effect if autorefresh and holdconnectionopen<br />
# are not both set.<br />
keepalive = 60<br />
# OfflineIMAP normally closes IMAP server connections between refreshes if<br />
# the global option autorefresh is specified. If you wish it to keep the<br />
# connection open, set this to true. This setting has no effect if autorefresh<br />
# is not set.<br />
holdconnectionopen = yes<br />
</nowiki>}}<br />
<br />
==== systemd service ====<br />
<br />
Instead of setting OfflineIMAP as a daemon, it can be managed with the package's provided [[systemd/User]] timer. To use it, [[start/enable]] the user timer {{ic|offlineimap-oneshot.timer}} using the {{ic|--user}} flag:<br />
<br />
$ systemctl --user enable offlineimap-oneshot.timer<br />
$ systemctl --user start offlineimap-oneshot.timer<br />
<br />
This timer by default runs OfflineIMAP every 15 minutes. This can be easily changed by creating a [[drop-in snippet]]:<br />
<br />
$ systemctl --user edit offlineimap-oneshot.timer<br />
<br />
For example, the following modifies the timer to check every 5 minutes:<br />
<br />
{{hc|~/.config/systemd/user/offlineimap-oneshot.timer.d/timer.conf|<nowiki><br />
[Timer]<br />
OnUnitInactiveSec=5m<br />
</nowiki>}}<br />
<br />
{{Note|The default package-provided {{ic|offlineimap-oneshot.service}} specifies {{ic|<nowiki>TimeoutStopSec=120</nowiki>}}, meaning that the timer-started service run will be terminated after two minutes. If syncing regularly takes longer than two minutes, or the service should be run more frequently than every two minutes, a drop-in snippet may also be required for {{ic|offlineimap-oneshot.service}}, appropriately modifying {{ic|TimeoutStopSec}}.}}<br />
<br />
{{Accuracy|Needs factual demonstration.|Talk:OfflineIMAP#Warnings}}<br />
<br />
{{Warning|Forced termination can lead to inconsistencies in OfflineIMAP's local database.}}<br />
<br />
=== Automatic mailbox generation for mutt ===<br />
<br />
[[Mutt]] cannot simply be pointed to an IMAP or maildir directory and be expected to guess which subdirectories are the mailboxes, but offlineimap can generate a muttrc fragment containing the mailboxes that it syncs.<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
[mbnames]<br />
enabled = yes<br />
filename = ~/.mutt/mailboxes<br />
header = "mailboxes "<br />
peritem = "+%(accountname)s/%(foldername)s"<br />
sep = " "<br />
footer = "\n"<br />
</nowiki>}}<br />
<br />
Then add the following lines to {{ic|~/.mutt/muttrc}}.<br />
<br />
{{hc|~/.mutt/muttrc|<nowiki><br />
# IMAP: offlineimap<br />
set folder = "~/mail"<br />
source ~/.mutt/mailboxes<br />
set spoolfile = "+account/INBOX"<br />
set record = "+account/Sent\ Items"<br />
set postponed = "+account/Drafts"<br />
</nowiki>}}<br />
<br />
{{ic|account}} is the name you have given to your IMAP account in {{ic|~/.offlineimaprc}}.<br />
<br />
=== Gmail configuration ===<br />
<br />
This remote repository is configured specifically for Gmail support, substituting uppercase folder names with lowercase ones, among other small additions. Keep in mind that this configuration does not sync the ''All Mail'' folder, since it is usually unnecessary and skipping it saves bandwidth:<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
[Repository gmail-remote]<br />
type = Gmail<br />
remoteuser = user@gmail.com<br />
remotepass = password<br />
nametrans = lambda foldername: re.sub ('^\[gmail\]', 'bak',<br />
re.sub ('sent_mail', 'sent',<br />
re.sub ('starred', 'flagged',<br />
re.sub (' ', '_', foldername.lower()))))<br />
folderfilter = lambda foldername: foldername not in ['[Gmail]/All Mail']<br />
# Necessary as of OfflineIMAP 6.5.4<br />
sslcacertfile = /etc/ssl/certs/ca-certificates.crt<br />
</nowiki>}}<br />
<br />
{{Note|<br />
* If you have Gmail set to another language, the folder names may appear translated too, e.g. "verzonden_berichten" instead of "sent_mail".<br />
* After version 6.3.5, offlineimap also creates remote folders to match your local ones. Thus you may need a nametrans rule for your local repository too that reverses the effects of this nametrans rule. If you don't want to make a reverse nametrans rule, you can disable remote folder creation by putting this in your remote configuration: {{ic|<nowiki>createfolders = False</nowiki>}}<br />
* As of 1 October 2012, the Gmail SSL certificate fingerprint is not always the same. This prevents using {{ic|cert_fingerprint}} and makes the {{ic|sslcacertfile}} approach a better solution for SSL verification (see [[#SSL fingerprint does not match]]).<br />
}}<br />
<br />
=== Password management ===<br />
<br />
==== .netrc ====<br />
<br />
Add the following lines to your {{ic|~/.netrc}}:<br />
<br />
machine hostname.tld<br />
login [your username]<br />
password [your password]<br />
<br />
Do not forget to give the file appropriate rights like 600 or 700:<br />
$ chmod 600 ~/.netrc<br />
<br />
==== Using GPG ====<br />
<br />
GNU Privacy Guard can be used for storing a password in an encrypted file. First set up [[GnuPG]] and then follow the steps in this section. It is assumed that you can use your GPG private key [[GnuPG#gpg-agent|without entering a password]] all the time.<br />
<br />
First type in the password for the email account in a plain text file. Do this in a secure directory with {{ic|700}} permissions located on a [[tmpfs]] to avoid writing the unencrypted password to the disk. Then [[encrypt]] the file with GnuPG setting yourself as the recipient.<br />
<br />
Remove the plain text file since it is no longer needed. Move the encrypted file to the final location, e.g. {{ic|~/.offlineimappass.gpg}}.<br />
<br />
Now create a python function that will decrypt the password:<br />
<br />
{{hc|~/.offlineimap.py|2=<br />
#! /usr/bin/env python2<br />
from subprocess import check_output<br />
<br />
def get_pass():<br />
return check_output("gpg -dq ~/.offlineimappass.gpg", shell=True).strip("\n")<br />
}}<br />
<br />
Load this file from {{ic|~/.offlineimaprc}} and specify the defined function:<br />
<br />
{{hc|~/.offlineimaprc|2=<br />
[general]<br />
# Path to file with arbitrary Python code to be loaded<br />
pythonfile = ~/.offlineimap.py<br />
...<br />
<br />
[Repository ''example'']<br />
# Decrypt and read the encrypted password<br />
remotepasseval = get_pass()<br />
...<br />
}}<br />
<br />
==== Using pass ====<br />
<br />
[[pass]] is a simple password manager from the command line based on GPG.<br />
<br />
First create a password for your email account(s):<br />
<br />
$ pass insert Mail/''account''<br />
<br />
Now create a python function that will decrypt the password:<br />
<br />
{{hc|~/.offlineimap.py|2=<br />
#! /usr/bin/env python2<br />
from subprocess import check_output<br />
<br />
<br />
def get_pass(account):<br />
return check_output("pass Mail/" + account, shell=True).splitlines()[0]<br />
}}<br />
<br />
This is an example for a multi-account setup. You can customize the argument to ''pass'' as defined previously.<br />
<br />
Load this file from {{ic|~/.offlineimaprc}} and specify the defined function: <br />
<br />
{{hc|~/.offlineimaprc|2=<br />
[general]<br />
# Path to file with arbitrary Python code to be loaded<br />
pythonfile = ~/.offlineimap.py<br />
...<br />
<br />
[Repository Gmail]<br />
# Decrypt and read the encrypted password<br />
remotepasseval = get_pass("Gmail")<br />
...<br />
}}<br />
<br />
==== Gnome keyring ====<br />
<br />
In configuration for remote repositories the remoteusereval/remotepasseval fields can be set to custom python code that evaluates to the username/password. The code can be a call to a function defined in a Python script pointed to by 'pythonfile' config field. Create {{ic|~/.offlineimap.py}} according to either of the two options below and use it in the configuration:<br />
<br />
{{bc|<nowiki><br />
[general]<br />
pythonfile = ~/.offlineimap.py<br />
<br />
[Repository examplerepo]<br />
type = IMAP<br />
remotehost = mail.example.com<br />
remoteusereval = get_username("examplerepo")<br />
remotepasseval = get_password("examplerepo")<br />
</nowiki>}}<br />
<br />
===== Option 1: using gnomekeyring Python module =====<br />
Install {{pkg|python2-gnomekeyring}}. Then:<br />
<br />
{{hc|~/.offlineimap.py|<nowiki><br />
#! /usr/bin/env python2<br />
<br />
import gnomekeyring as gkey<br />
<br />
def set_credentials(repo, user, pw):<br />
KEYRING_NAME = "offlineimap"<br />
attrs = { "repo": repo, "user": user }<br />
keyring = gkey.get_default_keyring_sync()<br />
gkey.item_create_sync(keyring, gkey.ITEM_NETWORK_PASSWORD,<br />
KEYRING_NAME, attrs, pw, True)<br />
<br />
def get_credentials(repo):<br />
keyring = gkey.get_default_keyring_sync()<br />
attrs = {"repo": repo}<br />
items = gkey.find_items_sync(gkey.ITEM_NETWORK_PASSWORD, attrs)<br />
return (items[0].attributes["user"], items[0].secret)<br />
<br />
def get_username(repo):<br />
return get_credentials(repo)[0]<br />
def get_password(repo):<br />
return get_credentials(repo)[1]<br />
<br />
if __name__ == "__main__":<br />
import sys<br />
import os<br />
import getpass<br />
if len(sys.argv) != 3:<br />
print "Usage: %s <repository> <username>" \<br />
% (os.path.basename(sys.argv[0]))<br />
sys.exit(0)<br />
repo, username = sys.argv[1:]<br />
password = getpass.getpass("Enter password for user '%s': " % username)<br />
password_confirmation = getpass.getpass("Confirm password: ")<br />
if password != password_confirmation:<br />
print "Error: password confirmation does not match"<br />
sys.exit(1)<br />
set_credentials(repo, username, password)<br />
</nowiki>}}<br />
<br />
To set the credentials, run this script from a shell.<br />
<br />
===== Option 2: using {{AUR|gnome-keyring-query}} tool =====<br />
<br />
{{hc|~/.offlineimap.py|<nowiki><br />
#! /usr/bin/env python2<br />
# executes gnome-keyring-query get passwd<br />
# and returns the output<br />
<br />
import locale<br />
from subprocess import Popen, PIPE<br />
<br />
encoding = locale.getdefaultlocale()[1]<br />
<br />
def get_password(p):<br />
(out, err) = Popen(["gnome-keyring-query", "get", p], stdout=PIPE).communicate()<br />
return out.decode(encoding).strip()<br />
</nowiki>}}<br />
<br />
==== python2-keyring ====<br />
<br />
There is a general solution that should work for any keyring. Install [http://pypi.python.org/pypi/keyring python2-keyring] from the [[AUR]], then change your {{ic|~/.offlineimaprc}} to say something like:<br />
<br />
{{bc|<nowiki><br />
[general]<br />
pythonfile = /home/user/offlineimap.py<br />
...<br />
[Repository RemoteEmail]<br />
remoteuser = username@host.net<br />
remotepasseval = keyring.get_password("offlineimap","username@host.net")<br />
...<br />
</nowiki>}}<br />
<br />
and somewhere in ~/offlineimap.py add {{ic|import keyring}}. Now all you have to do is set your password, like so:<br />
<br />
{{bc|$ python2 <br />
>>> import keyring<br />
>>> keyring.set_password("offlineimap","username@host.net", "MYPASSWORD")}}<br />
<br />
and it will grab the password from your (kwallet/gnome-) keyring instead of having to keep it in plaintext or enter it each time.<br />
<br />
==== Emacs EasyPG ====<br />
<br />
See http://www.emacswiki.org/emacs/OfflineIMAP#toc2<br />
<br />
==== KeePass / KeePassX ====<br />
<br />
Install {{AUR|python2-libkeepass}} from the AUR, then add the following to your offlineimap.py file:<br />
<br />
{{bc|<nowiki><br />
#! /usr/bin/env python2<br />
import os, getpass<br />
import libkeepass<br />
<br />
def get_keepass_pw(dbpath, title="", username=""):<br />
if os.path.isfile(dbpath):<br />
with libkeepass.open(<br />
os.path.expanduser(dbpath),<br />
password=getpass.getpass("Master password for '" + dbpath + "': ")) as kdb:<br />
entry = kdb.tree.xpath(<br />
'.//Entry'<br />
'/String/Key[.="Title"]/../Value[.="{title}"]/../..'<br />
'/String/Key[.="UserName"]/../Value[.="{username}"]/../..'<br />
'/String/Key[.="Password"]/../Value'.format(<br />
title=title,<br />
username=username<br />
)<br />
)[0]<br />
return entry.text<br />
else:<br />
print "Error: '" + dbpath + "' does not exist."<br />
return<br />
</nowiki>}}<br />
<br />
Next, edit your ~/.offlineimaprc:<br />
<br />
{{bc|<nowiki><br />
[general]<br />
# VVV Set this path correctly VVV<br />
pythonfile = /home/user/offlineimap.py<br />
...<br />
[Repository RemoteEmail]<br />
remoteuser = username@host.net<br />
# Set the DB path as well as the title and username of the specific entry you'd like to use.<br />
# This will prompt you on STDIN at runtime for the kdb master password.<br />
remotepasseval = get_keepass_pw("/path/to/database.kdb", title="<entry title>", username="<entry username>")<br />
...<br />
</nowiki>}}<br />
<br />
Note that as-is, this does not support KDBs with keyfiles, only KDBs with password-only auth.<br />
<br />
===== Old kdb format =====<br />
If your key database is stored in an old format, the XPath strings above may not be correct. The following method should work in that case, but it is not compatible with the current default format (v4).<br />
<br />
Install {{AUR|python2-keepass-git}} from the AUR, then add the following to your offlineimap.py file:<br />
<br />
{{bc|<nowiki><br />
#! /usr/bin/env python2<br />
import os, getpass<br />
from keepass import kpdb<br />
<br />
def get_keepass_pw(dbpath, title="", username=""):<br />
if os.path.isfile(dbpath):<br />
db = kpdb.Database(dbpath, getpass.getpass("Master password for '" + dbpath + "': "))<br />
for entry in db.entries:<br />
if (entry.title == title) and (entry.username == username):<br />
return entry.password<br />
else:<br />
print "Error: '" + dbpath + "' does not exist."<br />
return<br />
<br />
</nowiki>}}<br />
<br />
=== Kerberos authentication ===<br />
<br />
Install {{AUR|python2-kerberos}} from the [[AUR]] and do not specify {{ic|remotepass}} in your {{ic|.offlineimaprc}}.<br />
OfflineIMAP will then authenticate automatically, provided a valid Kerberos TGT is available.<br />
If {{ic|maxconnections}} is set, some connections may fail to authenticate; commenting {{ic|maxconnections}} out solves this problem.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Overriding UI and autorefresh settings ===<br />
<br />
For the sake of troubleshooting, it is sometimes convenient to launch offlineimap with a more verbose UI, no background syncs and perhaps even a debug level:<br />
$ offlineimap [ -o ] [ -d <debug_type> ] [ -u <ui> ]<br />
;-o<br />
:Disable autorefresh, keepalive, etc.<br />
<br />
;-d <debug_type><br />
:Where ''<debug_type>'' is one of {{Ic|imap}}, {{Ic|maildir}} or {{Ic|thread}}. The {{Ic|imap}} and {{Ic|maildir}} types are by far the most useful.<br />
<br />
;-u <ui><br />
:Where ''<ui>'' is one of {{Ic|CURSES.BLINKENLIGHTS}}, {{Ic|TTY.TTYUI}}, {{Ic|NONINTERACTIVE.BASIC}}, {{Ic|NONINTERACTIVE.QUIET}} or {{Ic|MACHINE.MACHINEUI}}. TTY.TTYUI is sufficient for debugging purposes.<br />
<br />
{{Note|More recent versions use the following for <ui>: {{Ic|blinkenlights}}, {{Ic|ttyui}}, {{Ic|basic}}, {{Ic|quiet}} or {{Ic|machineui}}.}}<br />
<br />
=== Folder could not be created ===<br />
<br />
In version 6.5.3, offlineimap gained the ability to create folders in the remote repository, as described [http://comments.gmane.org/gmane.mail.imap.offlineimap.general/4784 here].<br />
<br />
This can lead to errors of the following form when using {{Ic|nametrans}} on the remote repository:<br />
ERROR: Creating folder bar on repository foo-remote<br />
Folder 'bar'[foo-remote] could not be created. Server responded: ('NO', ['[ALREADYEXISTS] Duplicate folder name bar (Failure)'])<br />
<br />
The solution is to provide an inverse {{Ic|nametrans}} lambda for the local repository, e.g.<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
[Repository foo-local]<br />
nametrans = lambda foldername: foldername.replace('bar', 'BAR')<br />
<br />
[Repository foo-remote]<br />
nametrans = lambda foldername: foldername.replace('BAR', 'bar')<br />
</nowiki>}}<br />
<br />
* For working out the correct inverse mapping, the output of {{Ic|offlineimap --info}} should help.<br />
* After updating the mapping, it may be necessary to remove all of the folders under {{Ic|$HOME/.offlineimap/}} for the affected accounts.<br />
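<br />
Since each repository's {{Ic|nametrans}} is applied independently, the two lambdas must be exact inverses of each other. A quick standalone check, mirroring the example lambdas above in plain Python:<br />
<br />
```python
# nametrans for the local repository: local folder name -> remote folder name
local_nametrans = lambda foldername: foldername.replace('bar', 'BAR')
# nametrans for the remote repository: remote folder name -> local folder name
remote_nametrans = lambda foldername: foldername.replace('BAR', 'bar')

# Round-tripping a remote folder name must give the same name back;
# folders untouched by the mapping (e.g. INBOX) must pass through unchanged.
for remote_name in ['BAR', 'BAR/sub', 'INBOX']:
    assert local_nametrans(remote_nametrans(remote_name)) == remote_name
print('inverse mapping OK')
```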
<br />
=== SSL fingerprint does not match ===<br />
<br />
ERROR: Server SSL fingerprint 'keykeykey' for hostname 'example.com' does not match configured fingerprint. Please verify and set 'cert_fingerprint' accordingly if not set yet.<br />
<br />
To solve this, add to {{ic|~/.offlineimaprc}} (in the same section as {{ic|1=ssl = yes}}) one of the following:<br />
* either add {{ic|cert_fingerprint}}, with the certificate fingerprint of the remote server. This checks whether the remote server certificate matches the given fingerprint.<br />
cert_fingerprint = keykeykey<br />
* or add {{ic|sslcacertfile}} with the path to the system CA certificates file. Needs {{Pkg|ca-certificates}} installed. This validates the remote ssl certificate chain against the Certification Authorities in that file.<br />
sslcacertfile = /etc/ssl/certs/ca-certificates.crt<br />
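<br />
The fingerprint offlineimap compares is a hex digest of the server certificate; the sketch below only illustrates the shape of such a value, using a stand-in byte string rather than a real certificate (the hash algorithm expected may vary between offlineimap versions):<br />
<br />
```python
import hashlib

# Hypothetical stand-in for the server certificate in DER form.
der_bytes = b"not-a-real-certificate"

# SHA-1 shown here as an illustration; check which digest your
# offlineimap version expects for cert_fingerprint.
fingerprint = hashlib.sha1(der_bytes).hexdigest()
print(len(fingerprint))
```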
<br />
=== Copying message, connection closed ===<br />
<br />
ERROR: Copying message -2 [acc: email] connection closed<br />
Folder sent [acc: email]: ERROR: while syncing sent [account email] connection closed<br />
<br />
This can be caused by the same message being created both locally and on the server, which happens if your email provider automatically saves sent mail to the same folder as your local client does. If you encounter this, disable saving of sent messages in your local client.<br />
<br />
== See also ==<br />
<br />
* [http://lists.alioth.debian.org/mailman/listinfo/offlineimap-project Official OfflineIMAP mailing list]<br />
* [http://pbrisbin.com/posts/mutt_gmail_offlineimap/ Mutt + Gmail + Offlineimap] - An outline of brisbin's simple gmail/mutt setup using cron to keep offlineimap syncing.</div>Veoxhttps://wiki.archlinux.org/index.php?title=Talk:OfflineIMAP&diff=494495Talk:OfflineIMAP2017-10-30T11:45:17Z<p>Veox: /* Updated section on starting as a systemd timer */Clarify in last edit.</p>
<hr />
<div>http://nicolas33.github.com/offlineimap/#signals says <br />
<br />
You can send SIGINT to OfflineIMAP using (~/.offlineimap/pid) to kill it. SIGUSR1 will force an immediate resync of all accounts. This will be ignored for all accounts for which a resync is already in progress.<br />
<br />
Can someone please explain how to send a signal to a file?<br />
<br />
: In that file is stored PID (process ID) of the current running offlineimap process. You can use this number as an argument to kill command, to send signals. Like this:<br />
: {{ic|kill -s SIGUSR1 `cat ~/.offlineimap/pid`}}<br />
: You can see more details [https://github.com/nicolas33/offlineimap/commit/e1fb9492f84538df698d6a2f1cfa2738929ed040 here], [[User:Nixie|Nixie]] 13:50, 7 February 2011 (EST)<br />
<br />
== Warnings ==<br />
Before (re)adding more warnings about OfflineIMAP's alleged lack of stability and the potential to cataclysmically delete all your mail, please provide some examples of where this has actually happened.<br />
I have been using it exclusively for the last 3 years on at least 3 machines and have experienced no issues. I cannot find a single forum thread describing any such massive failures, and the repeated attempts<br />
to include warnings on this page are starting to look like scaremongering.<br />
<br />
[[User:Jasonwryan|Jasonwryan]] ([[User talk:Jasonwryan|talk]]) 03:14, 6 March 2013 (UTC)<br />
<br />
== Updated section on starting as a systemd timer ==<br />
<br />
The section is still called "systemd service", not "systemd timer", but I'm reluctant to dig in and find out how to change this without introducing link-rot.<br />
<br />
Also, the procedure as described there "works for me", and is described on [http://www.offlineimap.org/doc/contrib/systemd.html offlineimap's own contrib doc], but it'd be nice to get confirmation.<br />
<br />
There was a warning on "possible database inconsistency" when the service is terminated by systemd on timeout, but I haven't checked which signal is used to do that. The warning may be bogus - see [[#Warnings]] topic above.<br />
<br />
-- [[User:Veox|Veox]] ([[User talk:Veox|talk]]) 11:43, 30 October 2017 (UTC)</div>Veoxhttps://wiki.archlinux.org/index.php?title=OfflineIMAP&diff=494493OfflineIMAP2017-10-30T11:33:57Z<p>Veox: /* systemd service */ Update to current systemd timer integration.</p>
<hr />
<div>[[Category:Email clients]]<br />
[[ja:OfflineIMAP]]<br />
{{Related articles start}}<br />
{{Related|isync}}<br />
{{Related|notmuch}}<br />
{{Related|msmtp}}<br />
{{Related articles end}}<br />
<br />
[http://offlineimap.org/ OfflineIMAP] is a Python utility to sync mail from IMAP servers. It does not work with the POP3 protocol or mbox, and is usually paired with a MUA such as [[Mutt]].<br />
<br />
{{Note|[http://www.offlineimap.org/development/2015/10/08/imapfw-is-made-public.html imapfw] is intended to replace offlineimap in the future.}}<br />
<br />
== Installation ==<br />
<br />
Install {{pkg|offlineimap}}. For a development version, install {{AUR|offlineimap-git}}.<br />
<br />
== Configuration ==<br />
<br />
Offlineimap is distributed with two default configuration files, which are both located in {{ic|/usr/share/offlineimap/}}. {{ic|offlineimap.conf}} contains every setting and is thoroughly documented. Alternatively, {{ic|offlineimap.conf.minimal}} is not commented and only contains a small number of settings (see: [[#Minimal|Minimal]]).<br />
<br />
Copy one of the default configuration files to {{ic|~/.offlineimaprc}}.<br />
<br />
{{note|Writing a comment after an option/value on the same line is invalid syntax, hence take care that comments are placed on their own separate line.}}<br />
<br />
=== Minimal ===<br />
<br />
The following file is a commented version of {{ic|offlineimap.conf.minimal}}.<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
[general]<br />
# List of accounts to be synced, separated by a comma.<br />
accounts = main<br />
<br />
[Account main]<br />
# Identifier for the local repository; e.g. the maildir to be synced via IMAP.<br />
localrepository = main-local<br />
# Identifier for the remote repository; i.e. the actual IMAP, usually non-local.<br />
remoterepository = main-remote<br />
<br />
[Repository main-local]<br />
# OfflineIMAP supports Maildir, GmailMaildir, and IMAP for local repositories.<br />
type = Maildir<br />
# Where should the mail be placed?<br />
localfolders = ~/mail<br />
<br />
[Repository main-remote]<br />
# Remote repos can be IMAP or Gmail, the latter being a preconfigured IMAP.<br />
type = IMAP<br />
remotehost = host.domain.tld<br />
remoteuser = username<br />
</nowiki>}}<br />
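<br />
{{ic|~/.offlineimaprc}} uses standard INI syntax (offlineimap itself is Python 2 and ships its own parser, so the stdlib {{ic|configparser}} below is only an illustration of the structure; the sample values match the minimal configuration above):<br />
<br />
```python
import configparser

# Inline copy of the minimal configuration shown above.
sample = """
[general]
accounts = main

[Account main]
localrepository = main-local
remoterepository = main-remote
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# "accounts" is a comma-separated list of [Account ...] section names.
accounts = [name.strip() for name in cfg["general"]["accounts"].split(",")]
print(accounts)
```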
<br />
=== Selective folder synchronization ===<br />
<br />
For synchronizing only certain folders, you can use a [http://offlineimap.org/doc/nametrans.html#folderfilter folderfilter] in the '''remote''' section of the account in {{ic|~/.offlineimaprc}}. For example, the following configuration will only synchronize the folders {{ic|Inbox}} and {{ic|Sent}}:<br />
<br />
{{hc|~/.offlineimaprc|2=<br />
[Repository main-remote]<br />
# Synchronize only the folders Inbox and Sent:<br />
folderfilter = lambda foldername: foldername in ["Inbox", "Sent"]<br />
...<br />
}}<br />
<br />
For more options, see the [http://offlineimap.org/doc/nametrans.html#folderfilter official documentation].<br />
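<br />
{{ic|folderfilter}} is evaluated once per remote folder name and a folder is synchronized only when it returns true. The same logic in isolation:<br />
<br />
```python
# Mirrors the folderfilter lambda from the configuration above.
folderfilter = lambda foldername: foldername in ["Inbox", "Sent"]

# Hypothetical folder list as reported by the remote server.
remote_folders = ["Inbox", "Sent", "Spam", "Trash"]
kept = [f for f in remote_folders if folderfilter(f)]
print(kept)
```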
<br />
== Usage ==<br />
<br />
Before running offlineimap, create any parent directories that were allocated to local repositories:<br />
$ mkdir ~/mail<br />
<br />
Now, run the program:<br />
$ offlineimap<br />
<br />
Mail accounts will now be synced. If anything goes wrong, take a closer look at the error messages: OfflineIMAP is usually very verbose about problems, partly because the developers left Python tracebacks intact in the final product.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Running offlineimap in the background ===<br />
<br />
Most other mail retrieval agents assume that the user will run the tool as a [[daemon]], making the program sync periodically by default. In offlineimap, there are a few settings that control background operation.<br />
<br />
Confusingly, they are spread all over the configuration file:<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
# In the general section<br />
[general]<br />
# Controls how many accounts may be synced simultaneously<br />
maxsyncaccounts = 1<br />
<br />
# In the account identifier<br />
[Account main]<br />
# Minutes between syncs<br />
autorefresh = 0.5<br />
# Quick-syncs do not update if the only changes were to IMAP flags.<br />
# autorefresh=0.5 together with quick=10 yields<br />
# 10 quick refreshes between each full refresh, with 0.5 minutes between every <br />
# refresh, regardless of type.<br />
quick = 10<br />
<br />
# In the remote repository identifier<br />
[Repository main-remote]<br />
# Instead of closing the connection once a sync is complete, offlineimap will<br />
# send empty data to the server to hold the connection open. A value of 60<br />
# attempts to hold the connection for a minute between syncs (both quick and<br />
# autorefresh). This setting has no effect if autorefresh and holdconnectionopen<br />
# are not both set.<br />
keepalive = 60<br />
# OfflineIMAP normally closes IMAP server connections between refreshes if<br />
# the global option autorefresh is specified. If you wish it to keep the<br />
# connection open, set this to true. This setting has no effect if autorefresh<br />
# is not set.<br />
holdconnectionopen = yes<br />
</nowiki>}}<br />
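<br />
As a sanity check of the timing comments above: with {{ic|1=autorefresh = 0.5}} and {{ic|1=quick = 10}}, ten quick refreshes separate two consecutive full refreshes, giving eleven 0.5-minute intervals between full syncs (plain arithmetic, not offlineimap code):<br />
<br />
```python
autorefresh = 0.5  # minutes between any two consecutive refreshes
quick = 10         # quick refreshes between two full refreshes

# 10 quick refreshes + the following full refresh = 11 intervals.
minutes_between_full_syncs = (quick + 1) * autorefresh
print(minutes_between_full_syncs)
```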
<br />
==== systemd service ====<br />
<br />
Instead of running OfflineIMAP as a daemon, it can be managed with the package's provided [[systemd/User]] timer. To use it, [[start/enable]] the user timer {{ic|offlineimap-oneshot.timer}} using the {{ic|--user}} flag:<br />
<br />
$ systemctl --user enable offlineimap-oneshot.timer<br />
$ systemctl --user start offlineimap-oneshot.timer<br />
<br />
This timer by default runs OfflineIMAP every 15 minutes. This can be easily changed by creating a [[drop-in snippet]]:<br />
<br />
$ systemctl --user edit offlineimap-oneshot.timer<br />
<br />
For example, the following modifies the timer to check every 5 minutes:<br />
<br />
{{hc|~/.config/systemd/user/offlineimap-oneshot.timer.d/timer.conf|<nowiki><br />
[Timer]<br />
OnUnitInactiveSec=5m<br />
</nowiki>}}<br />
<br />
{{Note|The default package-provided {{ic|offlineimap-oneshot.service}} specifies {{ic|<nowiki>TimeoutStopSec=120</nowiki>}}, meaning that the timer-started service run will be terminated after two minutes. If syncing regularly takes longer than two minutes, or the service should be run more frequently than every two minutes, a drop-in snippet may also be required for {{ic|offlineimap-oneshot.service}}, appropriately modifying {{ic|TimeoutStopSec}}.}}<br />
<br />
{{Warning|Forced termination can lead to inconsistencies in the OfflineIMAP's local database.}}<br />
<br />
=== Automatic mailbox generation for mutt ===<br />
<br />
[[Mutt]] cannot be simply pointed to an IMAP or maildir directory and be expected to guess which subdirectories happen to be the mailboxes, yet offlineimap can generate a muttrc fragment containing the mailboxes that it syncs.<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
[mbnames]<br />
enabled = yes<br />
filename = ~/.mutt/mailboxes<br />
header = "mailboxes "<br />
peritem = "+%(accountname)s/%(foldername)s"<br />
sep = " "<br />
footer = "\n"<br />
</nowiki>}}<br />
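<br />
The {{ic|[mbnames]}} options are Python %-format templates expanded once per synced folder (quotes around the values are stripped by offlineimap). The output file can be previewed outside offlineimap; the folder list below is hypothetical and the expansion only approximates what offlineimap does:<br />
<br />
```python
# Same settings as in the [mbnames] section above, quotes stripped.
header = "mailboxes "
peritem = "+%(accountname)s/%(foldername)s"
sep = " "
footer = "\n"

# Hypothetical synced folders for account "main".
folders = [
    {"accountname": "main", "foldername": "INBOX"},
    {"accountname": "main", "foldername": "Sent"},
]

mailboxes_line = header + sep.join(peritem % f for f in folders) + footer
print(mailboxes_line)
```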
<br />
Then add the following lines to {{ic|~/.mutt/muttrc}}.<br />
<br />
{{hc|~/.mutt/muttrc|<nowiki><br />
# IMAP: offlineimap<br />
set folder = "~/mail"<br />
source ~/.mutt/mailboxes<br />
set spoolfile = "+account/INBOX"<br />
set record = "+account/Sent\ Items"<br />
set postponed = "+account/Drafts"<br />
</nowiki>}}<br />
<br />
{{ic|account}} is the name you have given to your IMAP account in {{ic|~/.offlineimaprc}}.<br />
<br />
=== Gmail configuration ===<br />
<br />
This remote repository is configured specifically for Gmail support, substituting folder names in uppercase for lowercase, among other small additions. Keep in mind that this configuration does not sync the ''All Mail'' folder, since it is usually unnecessary and skipping it saves bandwidth:<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
[Repository gmail-remote]<br />
type = Gmail<br />
remoteuser = user@gmail.com<br />
remotepass = password<br />
nametrans = lambda foldername: re.sub ('^\[gmail\]', 'bak',<br />
re.sub ('sent_mail', 'sent',<br />
re.sub ('starred', 'flagged',<br />
re.sub (' ', '_', foldername.lower()))))<br />
folderfilter = lambda foldername: foldername not in ['[Gmail]/All Mail']<br />
# Necessary as of OfflineIMAP 6.5.4<br />
sslcacertfile = /etc/ssl/certs/ca-certificates.crt<br />
</nowiki>}}<br />
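<br />
The chained {{ic|re.sub}} calls above can be checked standalone (Python 3 shown, with raw strings for the bracket escapes; the folder names are examples of what Gmail exposes):<br />
<br />
```python
import re

# Same transformation chain as the nametrans setting above.
nametrans = lambda foldername: re.sub(r'^\[gmail\]', 'bak',
    re.sub('sent_mail', 'sent',
    re.sub('starred', 'flagged',
    re.sub(' ', '_', foldername.lower()))))

print(nametrans('[Gmail]/Sent Mail'))   # lowercased, renamed, prefixed
print(nametrans('[Gmail]/Starred'))
```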
<br />
{{Note|<br />
* If you have Gmail set to another language, the folder names may appear translated too, e.g. "verzonden_berichten" instead of "sent_mail".<br />
* After version 6.3.5, offlineimap also creates remote folders to match your local ones. Thus you may need a nametrans rule for your local repository too that reverses the effects of this nametrans rule. If you don't want to make a reverse nametrans rule, you can disable remote folder creation by putting this in your remote configuration: {{ic|<nowiki>createfolders = False</nowiki>}}<br />
* As of 1 October 2012 gmail SSL certificate fingerprint is not always the same. This prevents from using {{ic|cert_fingerprint}} and makes the {{ic|sslcacertfile}} way a better solution for the SSL verification (see [[#SSL fingerprint does not match]]).<br />
}}<br />
<br />
=== Password management ===<br />
<br />
==== .netrc ====<br />
<br />
Add the following lines to your {{ic|~/.netrc}}:<br />
<br />
machine hostname.tld<br />
login [your username]<br />
password [your password]<br />
<br />
Do not forget to give the file restrictive permissions, such as 600:<br />
$ chmod 600 ~/.netrc<br />
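<br />
The file uses standard {{ic|netrc}} syntax, which can be verified with Python's stdlib parser. The sketch below writes a throwaway file with placeholder credentials rather than touching your real {{ic|~/.netrc}}:<br />
<br />
```python
import netrc
import os
import tempfile

# Placeholder credentials in the same layout as the ~/.netrc shown above.
content = "machine hostname.tld\nlogin myuser\npassword mypass\n"

# Write a throwaway netrc-style file with safe permissions.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write(content)
os.chmod(path, 0o600)

# authenticators() returns (login, account, password) for the machine.
login, _account, password = netrc.netrc(path).authenticators("hostname.tld")
print(login, password)
os.remove(path)
```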
<br />
==== Using GPG ====<br />
<br />
GNU Privacy Guard can be used for storing a password in an encrypted file. First set up [[GnuPG]] and then follow the steps in this section. It is assumed that you can use your GPG private key [[GnuPG#gpg-agent|without entering a password]] all the time.<br />
<br />
First type in the password for the email account in a plain text file. Do this in a secure directory with {{ic|700}} permissions located on a [[tmpfs]] to avoid writing the unencrypted password to the disk. Then [[encrypt]] the file with GnuPG setting yourself as the recipient.<br />
<br />
Remove the plain text file since it is no longer needed. Move the encrypted file to the final location, e.g. {{ic|~/.offlineimappass.gpg}}.<br />
<br />
Now create a python function that will decrypt the password:<br />
<br />
{{hc|~/.offlineimap.py|2=<br />
#! /usr/bin/env python2<br />
from subprocess import check_output<br />
<br />
def get_pass():<br />
return check_output("gpg -dq ~/.offlineimappass.gpg", shell=True).strip("\n")<br />
}}<br />
<br />
Load this file from {{ic|~/.offlineimaprc}} and specify the defined function:<br />
<br />
{{hc|~/.offlineimaprc|2=<br />
[general]<br />
# Path to file with arbitrary Python code to be loaded<br />
pythonfile = ~/.offlineimap.py<br />
...<br />
<br />
[Repository ''example'']<br />
# Decrypt and read the encrypted password<br />
remotepasseval = get_pass()<br />
...<br />
}}<br />
<br />
==== Using pass ====<br />
<br />
[[pass]] is a simple password manager from the command line based on GPG.<br />
<br />
First create a password for your email account(s):<br />
<br />
$ pass insert Mail/''account''<br />
<br />
Now create a python function that will decrypt the password:<br />
<br />
{{hc|~/.offlineimap.py|2=<br />
#! /usr/bin/env python2<br />
from subprocess import check_output<br />
<br />
<br />
def get_pass(account):<br />
return check_output("pass Mail/" + account, shell=True).splitlines()[0]<br />
}}<br />
<br />
This is an example for a multi-account setup. You can customize the argument to ''pass'' as defined previously.<br />
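<br />
The helper above simply takes the first line of the command's output. The same pattern is shown below with {{ic|echo}} standing in for {{ic|pass}}, so the sketch runs without a password store; note that under Python 3 {{ic|check_output}} returns bytes (offlineimap's Python 2 receives a str here):<br />
<br />
```python
from subprocess import check_output

def get_pass(account):
    # In real use the command would be "pass Mail/" + account;
    # echo stands in here so the sketch is self-contained.
    return check_output("echo s3cret-for-" + account, shell=True).splitlines()[0]

print(get_pass("Gmail"))
```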
<br />
Load this file from {{ic|~/.offlineimaprc}} and specify the defined function: <br />
<br />
{{hc|~/.offlineimaprc|2=<br />
[general]<br />
# Path to file with arbitrary Python code to be loaded<br />
pythonfile = ~/.offlineimap.py<br />
...<br />
<br />
[Repository Gmail]<br />
# Decrypt and read the encrypted password<br />
remotepasseval = get_pass("Gmail")<br />
...<br />
}}<br />
<br />
==== Gnome keyring ====<br />
<br />
In the configuration for remote repositories, the {{ic|remoteusereval}}/{{ic|remotepasseval}} fields can be set to custom Python code that evaluates to the username/password. The code can be a call to a function defined in the Python script pointed to by the {{ic|pythonfile}} config field. Create {{ic|~/.offlineimap.py}} according to either of the two options below and use it in the configuration:<br />
<br />
{{bc|<nowiki><br />
[general]<br />
pythonfile = ~/.offlineimap.py<br />
<br />
[Repository examplerepo]<br />
type = IMAP<br />
remotehost = mail.example.com<br />
remoteusereval = get_username("examplerepo")<br />
remotepasseval = get_password("examplerepo")<br />
</nowiki>}}<br />
<br />
===== Option 1: using gnomekeyring Python module =====<br />
Install {{pkg|python2-gnomekeyring}}. Then:<br />
<br />
{{hc|~/.offlineimap.py|<nowiki><br />
#! /usr/bin/env python2<br />
<br />
import gnomekeyring as gkey<br />
<br />
def set_credentials(repo, user, pw):<br />
KEYRING_NAME = "offlineimap"<br />
attrs = { "repo": repo, "user": user }<br />
keyring = gkey.get_default_keyring_sync()<br />
gkey.item_create_sync(keyring, gkey.ITEM_NETWORK_PASSWORD,<br />
KEYRING_NAME, attrs, pw, True)<br />
<br />
def get_credentials(repo):<br />
keyring = gkey.get_default_keyring_sync()<br />
attrs = {"repo": repo}<br />
items = gkey.find_items_sync(gkey.ITEM_NETWORK_PASSWORD, attrs)<br />
return (items[0].attributes["user"], items[0].secret)<br />
<br />
def get_username(repo):<br />
return get_credentials(repo)[0]<br />
def get_password(repo):<br />
return get_credentials(repo)[1]<br />
<br />
if __name__ == "__main__":<br />
import sys<br />
import os<br />
import getpass<br />
if len(sys.argv) != 3:<br />
print "Usage: %s <repository> <username>" \<br />
% (os.path.basename(sys.argv[0]))<br />
sys.exit(0)<br />
repo, username = sys.argv[1:]<br />
password = getpass.getpass("Enter password for user '%s': " % username)<br />
password_confirmation = getpass.getpass("Confirm password: ")<br />
if password != password_confirmation:<br />
print "Error: password confirmation does not match"<br />
sys.exit(1)<br />
set_credentials(repo, username, password)<br />
</nowiki>}}<br />
<br />
To set the credentials, run this script from a shell.<br />
<br />
===== Option 2: using {{AUR|gnome-keyring-query}} tool =====<br />
<br />
{{hc|~/.offlineimap.py|<nowiki><br />
#! /usr/bin/env python2<br />
# executes gnome-keyring-query get passwd<br />
# and returns the output<br />
<br />
import locale<br />
from subprocess import Popen, PIPE<br />
<br />
encoding = locale.getdefaultlocale()[1]<br />
<br />
def get_password(p):<br />
(out, err) = Popen(["gnome-keyring-query", "get", p], stdout=PIPE).communicate()<br />
return out.decode(encoding).strip()<br />
</nowiki>}}<br />
<br />
==== python2-keyring ====<br />
<br />
There is a general solution that should work for any keyring. Install [http://pypi.python.org/pypi/keyring python2-keyring] from [[AUR]], then change your ~/.offlineimaprc to say something like:<br />
<br />
{{bc|<nowiki><br />
[general]<br />
pythonfile = /home/user/offlineimap.py<br />
...<br />
[Repository RemoteEmail]<br />
remoteuser = username@host.net<br />
remotepasseval = keyring.get_password("offlineimap","username@host.net")<br />
...<br />
</nowiki>}}<br />
<br />
and somewhere in ~/offlineimap.py add {{ic|import keyring}}. Now all you have to do is set your password, like so:<br />
<br />
{{bc|$ python2 <br />
>>> import keyring<br />
>>> keyring.set_password("offlineimap","username@host.net", "MYPASSWORD")}}<br />
<br />
and it will grab the password from your (kwallet/gnome-) keyring instead of having to keep it in plaintext or enter it each time.<br />
<br />
==== Emacs EasyPG ====<br />
<br />
See http://www.emacswiki.org/emacs/OfflineIMAP#toc2<br />
<br />
==== KeePass / KeePassX ====<br />
<br />
Install {{AUR|python2-libkeepass}} from the AUR, then add the following to your offlineimap.py file:<br />
<br />
{{bc|<nowiki><br />
#! /usr/bin/env python2<br />
import os, getpass<br />
import libkeepass<br />
<br />
def get_keepass_pw(dbpath, title="", username=""):<br />
if os.path.isfile(dbpath):<br />
with libkeepass.open(<br />
os.path.expanduser(dbpath),<br />
password=getpass.getpass("Master password for '" + dbpath + "': ")) as kdb:<br />
entry = kdb.tree.xpath(<br />
'.//Entry'<br />
'/String/Key[.="Title"]/../Value[.="{title}"]/../..'<br />
'/String/Key[.="UserName"]/../Value[.="{username}"]/../..'<br />
'/String/Key[.="Password"]/../Value'.format(<br />
title=title,<br />
username=username<br />
)<br />
)[0]<br />
return entry.text<br />
else:<br />
print "Error: '" + dbpath + "' does not exist."<br />
return<br />
</nowiki>}}<br />
<br />
Next, edit your ~/.offlineimaprc:<br />
<br />
{{bc|<nowiki><br />
[general]<br />
# VVV Set this path correctly VVV<br />
pythonfile = /home/user/offlineimap.py<br />
...<br />
[Repository RemoteEmail]<br />
remoteuser = username@host.net<br />
# Set the DB path as well as the title and username of the specific entry you'd like to use.<br />
# This will prompt you on STDIN at runtime for the kdb master password.<br />
remotepasseval = get_keepass_pw("/path/to/database.kdb", title="<entry title>", username="<entry username>")<br />
...<br />
</nowiki>}}<br />
<br />
Note that as-is, this does not support KDBs with keyfiles, only KDBs with password-only auth.<br />
<br />
===== Old kdb format =====<br />
If your key database is stored in an old format, the XPath strings above may not be correct. The following method should work in that case, but it is not compatible with the current default format (v4).<br />
<br />
Install {{AUR|python2-keepass-git}} from the AUR, then add the following to your offlineimap.py file:<br />
<br />
{{bc|<nowiki><br />
#! /usr/bin/env python2<br />
import os, getpass<br />
from keepass import kpdb<br />
<br />
def get_keepass_pw(dbpath, title="", username=""):<br />
if os.path.isfile(dbpath):<br />
db = kpdb.Database(dbpath, getpass.getpass("Master password for '" + dbpath + "': "))<br />
for entry in db.entries:<br />
if (entry.title == title) and (entry.username == username):<br />
return entry.password<br />
else:<br />
print "Error: '" + dbpath + "' does not exist."<br />
return<br />
<br />
</nowiki>}}<br />
<br />
=== Kerberos authentication ===<br />
<br />
Install {{AUR|python2-kerberos}} from the [[AUR]] and do not specify {{ic|remotepass}} in your {{ic|.offlineimaprc}}.<br />
OfflineIMAP will then authenticate automatically, provided a valid Kerberos TGT is available.<br />
If {{ic|maxconnections}} is set, some connections may fail to authenticate; commenting {{ic|maxconnections}} out solves this problem.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Overriding UI and autorefresh settings ===<br />
<br />
For the sake of troubleshooting, it is sometimes convenient to launch offlineimap with a more verbose UI, no background syncs and perhaps even a debug level:<br />
$ offlineimap [ -o ] [ -d <debug_type> ] [ -u <ui> ]<br />
;-o<br />
:Disable autorefresh, keepalive, etc.<br />
<br />
;-d <debug_type><br />
:Where ''<debug_type>'' is one of {{Ic|imap}}, {{Ic|maildir}} or {{Ic|thread}}. The {{Ic|imap}} and {{Ic|maildir}} types are by far the most useful.<br />
<br />
;-u <ui><br />
:Where ''<ui>'' is one of {{Ic|CURSES.BLINKENLIGHTS}}, {{Ic|TTY.TTYUI}}, {{Ic|NONINTERACTIVE.BASIC}}, {{Ic|NONINTERACTIVE.QUIET}} or {{Ic|MACHINE.MACHINEUI}}. TTY.TTYUI is sufficient for debugging purposes.<br />
<br />
{{Note|More recent versions use the following for <ui>: {{Ic|blinkenlights}}, {{Ic|ttyui}}, {{Ic|basic}}, {{Ic|quiet}} or {{Ic|machineui}}.}}<br />
<br />
=== Folder could not be created ===<br />
<br />
In version 6.5.3, offlineimap gained the ability to create folders in the remote repository, as described [http://comments.gmane.org/gmane.mail.imap.offlineimap.general/4784 here].<br />
<br />
This can lead to errors of the following form when using {{Ic|nametrans}} on the remote repository:<br />
ERROR: Creating folder bar on repository foo-remote<br />
Folder 'bar'[foo-remote] could not be created. Server responded: ('NO', ['[ALREADYEXISTS] Duplicate folder name bar (Failure)'])<br />
<br />
The solution is to provide an inverse {{Ic|nametrans}} lambda for the local repository, e.g.<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
[Repository foo-local]<br />
nametrans = lambda foldername: foldername.replace('bar', 'BAR')<br />
<br />
[Repository foo-remote]<br />
nametrans = lambda foldername: foldername.replace('BAR', 'bar')<br />
</nowiki>}}<br />
<br />
* For working out the correct inverse mapping. the output of {{Ic|offlineimap --info}} should help.<br />
* After updating the mapping, it may be necessary to remove all of the folders under {{Ic|$HOME/.offlineimap/}} for the affected accounts.<br />
<br />
=== SSL fingerprint does not match ===<br />
<br />
ERROR: Server SSL fingerprint 'keykeykey' for hostname 'example.com' does not match configured fingerprint. Please verify and set 'cert_fingerprint' accordingly if not set yet.<br />
<br />
To solve this, add to {{ic|~/.offlineimaprc}} (in the same section as {{ic|1=ssl = yes}}) one of the following:<br />
* either add {{ic|cert_fingerprint}}, with the certificate fingerprint of the remote server. This checks whether the remote server certificate matches the given fingerprint.<br />
cert_fingerprint = keykeykey<br />
* or add {{ic|sslcacertfile}} with the path to the system CA certificates file. Needs {{Pkg|ca-certificates}} installed. This validates the remote ssl certificate chain against the Certification Authorities in that file.<br />
sslcacertfile = /etc/ssl/certs/ca-certificates.crt<br />
<br />
=== Copying message, connection closed ===<br />
<br />
ERROR: Copying message -2 [acc: email] connection closed<br />
Folder sent [acc: email]: ERROR: while syncing sent [account email] connection closed<br />
<br />
Cause of this can be creation of same message both locally and on server. This happens if your email provider automatically saves sent mails to same folder as your local client. If you encounter this, disable save of sent messages in your local client.<br />
<br />
== See also ==<br />
<br />
* [http://lists.alioth.debian.org/mailman/listinfo/offlineimap-project Official OfflineIMAP mailing list]<br />
* [http://pbrisbin.com/posts/mutt_gmail_offlineimap/ Mutt + Gmail + Offlineimap] - An outline of brisbin's simple gmail/mutt setup using cron to keep offlineimap syncing.</div>Veoxhttps://wiki.archlinux.org/index.php?title=OfflineIMAP&diff=494492OfflineIMAP2017-10-30T11:12:51Z<p>Veox: /* systemd service */ The $XDG_CONFIG_HOME was missing.</p>
<hr />
<div>[[Category:Email clients]]<br />
[[ja:OfflineIMAP]]<br />
{{Related articles start}}<br />
{{Related|isync}}<br />
{{Related|notmuch}}<br />
{{Related|msmtp}}<br />
{{Related articles end}}<br />
<br />
[http://offlineimap.org/ OfflineIMAP] is a Python utility to sync mail from IMAP servers. It does not work with the POP3 protocol or mbox, and is usually paired with a MUA such as [[Mutt]].<br />
<br />
{{Note|[http://www.offlineimap.org/development/2015/10/08/imapfw-is-made-public.html imapfw] is intended to replace offlineimap in the future.}}<br />
<br />
== Installation ==<br />
<br />
Install {{pkg|offlineimap}}. For a development version, install {{AUR|offlineimap-git}}.<br />
<br />
== Configuration ==<br />
<br />
Offlineimap is distributed with two default configuration files, which are both located in {{ic|/usr/share/offlineimap/}}. {{ic|offlineimap.conf}} contains every setting and is thoroughly documented. Alternatively, {{ic|offlineimap.conf.minimal}} is not commented and only contains a small number of settings (see: [[#Minimal|Minimal]]).<br />
<br />
Copy one of the default configuration files to {{ic|~/.offlineimaprc}}.<br />
<br />
{{note|Writing a comment after an option/value on the same line is invalid syntax, hence take care that comments are placed on their own separate line.}}<br />
<br />
=== Minimal ===<br />
<br />
The following file is a commented version of {{ic|offlineimap.conf.minimal}}.<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
[general]<br />
# List of accounts to be synced, separated by a comma.<br />
accounts = main<br />
<br />
[Account main]<br />
# Identifier for the local repository; e.g. the maildir to be synced via IMAP.<br />
localrepository = main-local<br />
# Identifier for the remote repository; i.e. the actual IMAP, usually non-local.<br />
remoterepository = main-remote<br />
<br />
[Repository main-local]<br />
# OfflineIMAP supports Maildir, GmailMaildir, and IMAP for local repositories.<br />
type = Maildir<br />
# Where should the mail be placed?<br />
localfolders = ~/mail<br />
<br />
[Repository main-remote]<br />
# Remote repos can be IMAP or Gmail, the latter being a preconfigured IMAP.<br />
type = IMAP<br />
remotehost = host.domain.tld<br />
remoteuser = username<br />
</nowiki>}}<br />
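OfflineIMAP's configuration is plain INI, so the account/repository linkage can be inspected with stock tooling. The sketch below uses Python 3's {{ic|configparser}} purely for illustration (OfflineIMAP itself is Python 2 of this vintage, and parses the file internally):

```python
import configparser

# The minimal configuration above, inlined as a string for illustration.
cfg_text = """
[general]
accounts = main

[Account main]
localrepository = main-local
remoterepository = main-remote

[Repository main-local]
type = Maildir
localfolders = ~/mail

[Repository main-remote]
type = IMAP
remotehost = host.domain.tld
remoteuser = username
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)

# Each account section names its local and remote repository sections.
account = cfg["general"]["accounts"]
local = cfg["Account " + account]["localrepository"]
remote = cfg["Account " + account]["remoterepository"]
# local == "main-local", remote == "main-remote"
```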
<br />
=== Selective folder synchronization ===<br />
<br />
For synchronizing only certain folders, you can use a [http://offlineimap.org/doc/nametrans.html#folderfilter folderfilter] in the '''remote''' section of the account in {{ic|~/.offlineimaprc}}. For example, the following configuration will only synchronize the folders {{ic|Inbox}} and {{ic|Sent}}:<br />
<br />
{{hc|~/.offlineimaprc|2=<br />
[Repository main-remote]<br />
# Synchronize only the folders Inbox and Sent:<br />
folderfilter = lambda foldername: foldername in ["Inbox", "Sent"]<br />
...<br />
}}<br />
<br />
For more options, see the [http://offlineimap.org/doc/nametrans.html#folderfilter official documentation].<br />
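The filter is simply a Python callable invoked once per remote folder name; a folder is synchronized only when it returns true. A standalone sketch (the folder names here are made up) shows both the include-style filter above and an exclusion-style variant:

```python
# Hypothetical remote folder names; OfflineIMAP calls folderfilter on each
# one and skips any folder for which it returns False.
folders = ["Inbox", "Sent", "Spam", "Trash"]

# Include-style filter, as in the configuration above.
include = lambda foldername: foldername in ["Inbox", "Sent"]

# Exclusion-style filter: sync everything except Spam and Trash.
exclude = lambda foldername: foldername not in ["Spam", "Trash"]

synced = [f for f in folders if include(f)]
# With this folder list, both filters keep Inbox and Sent.
```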
<br />
== Usage ==<br />
<br />
Before running offlineimap, create any parent directories that were allocated to local repositories:<br />
$ mkdir ~/mail<br />
<br />
Now, run the program:<br />
$ offlineimap<br />
<br />
Mail accounts will now be synced. If anything goes wrong, take a closer look at the error messages. OfflineIMAP is usually very verbose about problems, partly because tracebacks are not stripped from the final product.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Running offlineimap in the background ===<br />
<br />
Most other mail transfer agents assume that the user will be using the tool as a [[daemon]] by making the program sync periodically by default. In offlineimap, there are a few settings that control backgrounded tasks.<br />
<br />
Confusingly, they are spread all over the configuration file:<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
# In the general section<br />
[general]<br />
# Controls how many accounts may be synced simultaneously<br />
maxsyncaccounts = 1<br />
<br />
# In the account identifier<br />
[Account main]<br />
# Minutes between syncs<br />
autorefresh = 0.5<br />
# Quick-syncs do not update if the only changes were to IMAP flags.<br />
# autorefresh=0.5 together with quick=10 yields<br />
# 10 quick refreshes between each full refresh, with 0.5 minutes between every <br />
# refresh, regardless of type.<br />
quick = 10<br />
<br />
# In the remote repository identifier<br />
[Repository main-remote]<br />
# Instead of closing the connection once a sync is complete, offlineimap will<br />
# send empty data to the server to hold the connection open. A value of 60<br />
# attempts to hold the connection for a minute between syncs (both quick and<br />
# autorefresh). This setting has no effect if autorefresh and holdconnectionopen<br />
# are not both set.<br />
keepalive = 60<br />
# OfflineIMAP normally closes IMAP server connections between refreshes if<br />
# the global option autorefresh is specified. If you wish it to keep the<br />
# connection open, set this to true. This setting has no effect if autorefresh<br />
# is not set.<br />
holdconnectionopen = yes<br />
</nowiki>}}<br />
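To make the autorefresh/quick interplay concrete, here is a small sketch of the schedule those settings imply (this is not OfflineIMAP code, just the pattern described in the comments above): with {{ic|1=quick = 10}}, every eleventh sync is a full one, and {{ic|autorefresh}} sets the gap between any two syncs.

```python
autorefresh = 0.5   # minutes between syncs of either kind
quick = 10          # quick syncs between two full syncs

# Kinds of the first 22 syncs: a full sync, then 10 quick ones, repeating.
kinds = ["full" if i % (quick + 1) == 0 else "quick" for i in range(22)]

# Time between two full syncs: 11 syncs at 0.5 minutes each.
minutes_per_full_cycle = (quick + 1) * autorefresh
```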
<br />
==== systemd service ====<br />
{{Out of date|New offlineimap-oneshot.{timer,service} provided.}}<br />
<br />
Instead of setting OfflineIMAP as a daemon, it can be managed with the package's provided [[systemd/User]] timer. To use it, [[start/enable]] the user timer {{ic|offlineimap.timer}} using the {{ic|--user}} flag.<br />
<br />
This timer by default runs OfflineIMAP every 15 minutes. This can be easily changed by creating a [[drop-in snippet]]. For example, the following modifies the timer to check every 5 minutes:<br />
<br />
{{hc|~/.config/systemd/user/offlineimap.timer.d/timer.conf|<nowiki><br />
[Timer]<br />
OnUnitInactiveSec=5m<br />
</nowiki>}}<br />
<br />
{{Accuracy|This can lead to inconsistencies in the OfflineIMAP's local database.}}<br />
<br />
For a more robust solution, it is possible to set a watchdog which will kill OfflineIMAP in case of a freeze:<br />
<br />
{{hc|~/.config/systemd/user/offlineimap.service.d/service.conf|<nowiki><br />
[Service]<br />
WatchdogSec=300<br />
</nowiki>}}<br />
<br />
=== Automatic mailbox generation for mutt ===<br />
<br />
[[Mutt]] cannot simply be pointed to an IMAP or maildir directory and be expected to guess which subdirectories happen to be the mailboxes, but offlineimap can generate a muttrc fragment containing the mailboxes that it syncs.<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
[mbnames]<br />
enabled = yes<br />
filename = ~/.mutt/mailboxes<br />
header = "mailboxes "<br />
peritem = "+%(accountname)s/%(foldername)s"<br />
sep = " "<br />
footer = "\n"<br />
</nowiki>}}<br />
<br />
Then add the following lines to {{ic|~/.mutt/muttrc}}.<br />
<br />
{{hc|~/.mutt/muttrc|<nowiki><br />
# IMAP: offlineimap<br />
set folder = "~/mail"<br />
source ~/.mutt/mailboxes<br />
set spoolfile = "+account/INBOX"<br />
set record = "+account/Sent\ Items"<br />
set postponed = "+account/Drafts"<br />
</nowiki>}}<br />
<br />
{{ic|account}} is the name you have given to your IMAP account in {{ic|~/.offlineimaprc}}.<br />
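The {{ic|peritem}} value is an ordinary Python %-format template. A rough sketch of how the fragment is assembled (the account and folder names here are made up; the real list comes from the synced repositories):

```python
# The mbnames settings from the configuration above.
header = "mailboxes "
peritem = "+%(accountname)s/%(foldername)s"
sep = " "
footer = "\n"

# Made-up mailboxes standing in for the folders OfflineIMAP syncs.
boxes = [{"accountname": "main", "foldername": "INBOX"},
         {"accountname": "main", "foldername": "Sent"}]

fragment = header + sep.join(peritem % b for b in boxes) + footer
# fragment == "mailboxes +main/INBOX +main/Sent\n"
```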
<br />
=== Gmail configuration ===<br />
<br />
This remote repository is configured specifically for Gmail support, substituting folder names in uppercase for lowercase, among other small additions. Keep in mind that this configuration does not sync the ''All Mail'' folder, since it is usually unnecessary and skipping it saves bandwidth:<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
[Repository gmail-remote]<br />
type = Gmail<br />
remoteuser = user@gmail.com<br />
remotepass = password<br />
nametrans = lambda foldername: re.sub ('^\[gmail\]', 'bak',<br />
re.sub ('sent_mail', 'sent',<br />
re.sub ('starred', 'flagged',<br />
re.sub (' ', '_', foldername.lower()))))<br />
folderfilter = lambda foldername: foldername not in ['[Gmail]/All Mail']<br />
# Necessary as of OfflineIMAP 6.5.4<br />
sslcacertfile = /etc/ssl/certs/ca-certificates.crt<br />
</nowiki>}}<br />
<br />
{{Note|<br />
* If you have Gmail set to another language, the folder names may appear translated too, e.g. "verzonden_berichten" instead of "sent_mail".<br />
* After version 6.3.5, offlineimap also creates remote folders to match your local ones. Thus you may need a nametrans rule for your local repository too that reverses the effects of this nametrans rule. If you don't want to make a reverse nametrans rule, you can disable remote folder creation by putting this in your remote configuration: {{ic|<nowiki>createfolders = False</nowiki>}}<br />
* As of 1 October 2012 gmail SSL certificate fingerprint is not always the same. This prevents from using {{ic|cert_fingerprint}} and makes the {{ic|sslcacertfile}} way a better solution for the SSL verification (see [[#SSL fingerprint does not match]]).<br />
}}<br />
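The nametrans chain above can be tried standalone in Python to see what it does to typical Gmail folder names (sample names only; r-string prefixes are added here for the regex escapes):

```python
import re

# The nametrans lambda from the configuration above, reproduced standalone.
nametrans = lambda foldername: re.sub(r'^\[gmail\]', 'bak',
            re.sub(r'sent_mail', 'sent',
            re.sub(r'starred', 'flagged',
            re.sub(r' ', '_', foldername.lower()))))

renamed = [nametrans(f) for f in ['INBOX', '[Gmail]/Sent Mail', '[Gmail]/Starred']]
# renamed == ['inbox', 'bak/sent', 'bak/flagged']
```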
<br />
=== Password management ===<br />
<br />
==== .netrc ====<br />
<br />
Add the following lines to your {{ic|~/.netrc}}:<br />
<br />
machine hostname.tld<br />
login [your username]<br />
password [your password]<br />
<br />
Do not forget to give the file appropriate rights like 600 or 700:<br />
$ chmod 600 ~/.netrc<br />
<br />
==== Using GPG ====<br />
<br />
GNU Privacy Guard can be used for storing a password in an encrypted file. First set up [[GnuPG]] and then follow the steps in this section. It is assumed that you can use your GPG private key [[GnuPG#gpg-agent|without entering a password]] all the time.<br />
<br />
First type in the password for the email account in a plain text file. Do this in a secure directory with {{ic|700}} permissions located on a [[tmpfs]] to avoid writing the unencrypted password to the disk. Then [[encrypt]] the file with GnuPG setting yourself as the recipient.<br />
<br />
Remove the plain text file since it is no longer needed. Move the encrypted file to the final location, e.g. {{ic|~/.offlineimappass.gpg}}.<br />
<br />
Now create a python function that will decrypt the password:<br />
<br />
{{hc|~/.offlineimap.py|2=<br />
#! /usr/bin/env python2<br />
from subprocess import check_output<br />
<br />
def get_pass():<br />
    return check_output("gpg -dq ~/.offlineimappass.gpg", shell=True).strip("\n")<br />
}}<br />
<br />
Load this file from {{ic|~/.offlineimaprc}} and specify the defined function:<br />
<br />
{{hc|~/.offlineimaprc|2=<br />
[general]<br />
# Path to file with arbitrary Python code to be loaded<br />
pythonfile = ~/.offlineimap.py<br />
...<br />
<br />
[Repository ''example'']<br />
# Decrypt and read the encrypted password<br />
remotepasseval = get_pass()<br />
...<br />
}}<br />
<br />
==== Using pass ====<br />
<br />
[[pass]] is a simple password manager from the command line based on GPG.<br />
<br />
First create a password for your email account(s):<br />
<br />
$ pass insert Mail/''account''<br />
<br />
Now create a python function that will decrypt the password:<br />
<br />
{{hc|~/.offlineimap.py|2=<br />
#! /usr/bin/env python2<br />
from subprocess import check_output<br />
<br />
<br />
def get_pass(account):<br />
    return check_output("pass Mail/" + account, shell=True).splitlines()[0]<br />
}}<br />
<br />
This is an example for a multi-account setup. You can customize the argument to ''pass'' as defined previously.<br />
<br />
Load this file from {{ic|~/.offlineimaprc}} and specify the defined function: <br />
<br />
{{hc|~/.offlineimaprc|2=<br />
[general]<br />
# Path to file with arbitrary Python code to be loaded<br />
pythonfile = ~/.offlineimap.py<br />
...<br />
<br />
[Repository Gmail]<br />
# Decrypt and read the encrypted password<br />
remotepasseval = get_pass("Gmail")<br />
...<br />
}}<br />
<br />
==== Gnome keyring ====<br />
<br />
In the configuration for remote repositories, the remoteusereval/remotepasseval fields can be set to custom Python code that evaluates to the username/password. The code can be a call to a function defined in a Python script pointed to by the 'pythonfile' config field. Create {{ic|~/.offlineimap.py}} according to either of the two options below and use it in the configuration:<br />
<br />
{{bc|<nowiki><br />
[general]<br />
pythonfile = ~/.offlineimap.py<br />
<br />
[Repository examplerepo]<br />
type = IMAP<br />
remotehost = mail.example.com<br />
remoteusereval = get_username("examplerepo")<br />
remotepasseval = get_password("examplerepo")<br />
</nowiki>}}<br />
<br />
===== Option 1: using gnomekeyring Python module =====<br />
Install {{pkg|python2-gnomekeyring}}. Then:<br />
<br />
{{hc|~/.offlineimap.py|<nowiki><br />
#! /usr/bin/env python2<br />
<br />
import gnomekeyring as gkey<br />
<br />
def set_credentials(repo, user, pw):<br />
    KEYRING_NAME = "offlineimap"<br />
    attrs = { "repo": repo, "user": user }<br />
    keyring = gkey.get_default_keyring_sync()<br />
    gkey.item_create_sync(keyring, gkey.ITEM_NETWORK_PASSWORD,<br />
                          KEYRING_NAME, attrs, pw, True)<br />
<br />
def get_credentials(repo):<br />
    keyring = gkey.get_default_keyring_sync()<br />
    attrs = {"repo": repo}<br />
    items = gkey.find_items_sync(gkey.ITEM_NETWORK_PASSWORD, attrs)<br />
    return (items[0].attributes["user"], items[0].secret)<br />
<br />
def get_username(repo):<br />
    return get_credentials(repo)[0]<br />
def get_password(repo):<br />
    return get_credentials(repo)[1]<br />
<br />
if __name__ == "__main__":<br />
    import sys<br />
    import os<br />
    import getpass<br />
    if len(sys.argv) != 3:<br />
        print "Usage: %s <repository> <username>" \<br />
            % (os.path.basename(sys.argv[0]))<br />
        sys.exit(0)<br />
    repo, username = sys.argv[1:]<br />
    password = getpass.getpass("Enter password for user '%s': " % username)<br />
    password_confirmation = getpass.getpass("Confirm password: ")<br />
    if password != password_confirmation:<br />
        print "Error: password confirmation does not match"<br />
        sys.exit(1)<br />
    set_credentials(repo, username, password)<br />
</nowiki>}}<br />
<br />
To set the credentials, run this script from a shell.<br />
<br />
===== Option 2: using {{AUR|gnome-keyring-query}} tool =====<br />
<br />
{{hc|~/.offlineimap.py|<nowiki><br />
#! /usr/bin/env python2<br />
# executes gnome-keyring-query get passwd<br />
# and returns the output<br />
<br />
import locale<br />
from subprocess import Popen, PIPE<br />
<br />
encoding = locale.getdefaultlocale()[1]<br />
<br />
def get_password(p):<br />
    (out, err) = Popen(["gnome-keyring-query", "get", p], stdout=PIPE).communicate()<br />
    return out.decode(encoding).strip()<br />
</nowiki>}}<br />
<br />
==== python2-keyring ====<br />
<br />
There is a general solution that should work for any keyring. Install [http://pypi.python.org/pypi/keyring python2-keyring] from [[AUR]], then change your ~/.offlineimaprc to say something like:<br />
<br />
{{bc|<nowiki><br />
[general]<br />
pythonfile = /home/user/offlineimap.py<br />
...<br />
[Repository RemoteEmail]<br />
remoteuser = username@host.net<br />
remotepasseval = keyring.get_password("offlineimap","username@host.net")<br />
...<br />
</nowiki>}}<br />
<br />
and somewhere in ~/offlineimap.py add {{ic|import keyring}}. Now all you have to do is set your password, like so:<br />
<br />
{{bc|$ python2 <br />
>>> import keyring<br />
>>> keyring.set_password("offlineimap","username@host.net", "MYPASSWORD")}}<br />
<br />
and it will grab the password from your (kwallet/gnome-) keyring instead of having to keep it in plaintext or enter it each time.<br />
<br />
==== Emacs EasyPG ====<br />
<br />
See http://www.emacswiki.org/emacs/OfflineIMAP#toc2<br />
<br />
==== KeePass / KeePassX ====<br />
<br />
Install {{AUR|python2-libkeepass}} from the AUR, then add the following to your offlineimap.py file:<br />
<br />
{{bc|<nowiki><br />
#! /usr/bin/env python2<br />
import os, getpass<br />
import libkeepass<br />
<br />
def get_keepass_pw(dbpath, title="", username=""):<br />
    if os.path.isfile(dbpath):<br />
        with libkeepass.open(<br />
                os.path.expanduser(dbpath),<br />
                password=getpass.getpass("Master password for '" + dbpath + "': ")) as kdb:<br />
            entry = kdb.tree.xpath(<br />
                './/Entry'<br />
                '/String/Key[.="Title"]/../Value[.="{title}"]/../..'<br />
                '/String/Key[.="UserName"]/../Value[.="{username}"]/../..'<br />
                '/String/Key[.="Password"]/../Value'.format(<br />
                    title=title,<br />
                    username=username<br />
                )<br />
            )[0]<br />
            return entry.text<br />
    else:<br />
        print "Error: '" + dbpath + "' does not exist."<br />
        return<br />
</nowiki>}}<br />
<br />
Next, edit your ~/.offlineimaprc:<br />
<br />
{{bc|<nowiki><br />
[general]<br />
# VVV Set this path correctly VVV<br />
pythonfile = /home/user/offlineimap.py<br />
...<br />
[Repository RemoteEmail]<br />
remoteuser = username@host.net<br />
# Set the DB path as well as the title and username of the specific entry you'd like to use.<br />
# This will prompt you on STDIN at runtime for the kdb master password.<br />
remotepasseval = get_keepass_pw("/path/to/database.kdb", title="<entry title>", username="<entry username>")<br />
...<br />
</nowiki>}}<br />
<br />
Note that as-is, this does not support KDBs with keyfiles, only KDBs with password-only auth.<br />
<br />
=====Old kdb format =====<br />
If your key database is stored in an old format, the XPath strings above may not be correct. The following method should work in that case, but it is not compatible with the current default format (v4).<br />
<br />
Install {{AUR|python2-keepass-git}} from the AUR, then add the following to your offlineimap.py file:<br />
<br />
{{bc|<nowiki><br />
#! /usr/bin/env python2<br />
import os, getpass<br />
from keepass import kpdb<br />
<br />
def get_keepass_pw(dbpath, title="", username=""):<br />
    if os.path.isfile(dbpath):<br />
        db = kpdb.Database(dbpath, getpass.getpass("Master password for '" + dbpath + "': "))<br />
        for entry in db.entries:<br />
            if (entry.title == title) and (entry.username == username):<br />
                return entry.password<br />
    else:<br />
        print "Error: '" + dbpath + "' does not exist."<br />
        return<br />
<br />
</nowiki>}}<br />
<br />
=== Kerberos authentication ===<br />
<br />
Install {{AUR|python2-kerberos}} from the [[AUR]] and do not specify remotepass in your .offlineimaprc. <br />
OfflineIMAP will figure out the rest, as long as you have a valid Kerberos TGT. <br />
If you have set 'maxconnections', some connections will fail; commenting 'maxconnections' out solves this problem.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Overriding UI and autorefresh settings ===<br />
<br />
For the sake of troubleshooting, it is sometimes convenient to launch offlineimap with a more verbose UI, no background syncs and perhaps even a debug level:<br />
$ offlineimap [ -o ] [ -d <debug_type> ] [ -u <ui> ]<br />
;-o<br />
:Disable autorefresh, keepalive, etc.<br />
<br />
;-d <debug_type><br />
:Where ''<debug_type>'' is one of {{Ic|imap}}, {{Ic|maildir}} or {{Ic|thread}}. The {{Ic|imap}} and {{Ic|maildir}} types are, by far, the most useful.<br />
<br />
;-u <ui><br />
:Where ''<ui>'' is one of {{Ic|CURSES.BLINKENLIGHTS}}, {{Ic|TTY.TTYUI}}, {{Ic|NONINTERACTIVE.BASIC}}, {{Ic|NONINTERACTIVE.QUIET}} or {{Ic|MACHINE.MACHINEUI}}. TTY.TTYUI is sufficient for debugging purposes.<br />
<br />
{{Note|More recent versions use the following for <ui>: {{Ic|blinkenlights}}, {{Ic|ttyui}}, {{Ic|basic}}, {{Ic|quiet}} or {{Ic|machineui}}.}}<br />
<br />
=== Folder could not be created ===<br />
<br />
In version 6.5.3, offlineimap gained the ability to create folders in the remote repository, as described [http://comments.gmane.org/gmane.mail.imap.offlineimap.general/4784 here].<br />
<br />
This can lead to errors of the following form when using {{Ic|nametrans}} on the remote repository:<br />
ERROR: Creating folder bar on repository foo-remote<br />
Folder 'bar'[foo-remote] could not be created. Server responded: ('NO', ['[ALREADYEXISTS] Duplicate folder name bar (Failure)'])<br />
<br />
The solution is to provide an inverse {{Ic|nametrans}} lambda for the local repository, e.g.<br />
<br />
{{hc|~/.offlineimaprc|<nowiki><br />
[Repository foo-local]<br />
nametrans = lambda foldername: foldername.replace('bar', 'BAR')<br />
<br />
[Repository foo-remote]<br />
nametrans = lambda foldername: foldername.replace('BAR', 'bar')<br />
</nowiki>}}<br />
<br />
* For working out the correct inverse mapping, the output of {{Ic|offlineimap --info}} should help.<br />
* After updating the mapping, it may be necessary to remove all of the folders under {{Ic|$HOME/.offlineimap/}} for the affected accounts.<br />
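A quick way to sanity-check such a pair is to verify in plain Python that the two lambdas are inverses of each other for the folder names you care about:

```python
# The two nametrans lambdas from the configuration above.
local_trans = lambda foldername: foldername.replace('bar', 'BAR')
remote_trans = lambda foldername: foldername.replace('BAR', 'bar')

# Round-tripping every folder name must give the name back, in both directions.
names = ['bar', 'INBOX', 'bar/sub']
ok = all(remote_trans(local_trans(n)) == n for n in names)
```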
<br />
=== SSL fingerprint does not match ===<br />
<br />
ERROR: Server SSL fingerprint 'keykeykey' for hostname 'example.com' does not match configured fingerprint. Please verify and set 'cert_fingerprint' accordingly if not set yet.<br />
<br />
To solve this, add to {{ic|~/.offlineimaprc}} (in the same section as {{ic|1=ssl = yes}}) one of the following:<br />
* either add {{ic|cert_fingerprint}}, with the certificate fingerprint of the remote server. This checks whether the remote server certificate matches the given fingerprint.<br />
cert_fingerprint = keykeykey<br />
* or add {{ic|sslcacertfile}} with the path to the system CA certificates file. Needs {{Pkg|ca-certificates}} installed. This validates the remote ssl certificate chain against the Certification Authorities in that file.<br />
sslcacertfile = /etc/ssl/certs/ca-certificates.crt<br />
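For reference, the fingerprint is a hash of the server's DER-encoded certificate written as lowercase, colon-free hex (older OfflineIMAP versions used SHA-1; check your version's documentation). A sketch of the computation, where the certificate bytes are a stand-in (in practice they would come from something like {{ic|ssl.get_server_certificate()}} converted with {{ic|ssl.PEM_cert_to_DER_cert()}}):

```python
import hashlib

# Stand-in for the server's DER-encoded certificate bytes.
der_cert = b"not a real certificate"

# Hex digest, lowercase and colon-free, as cert_fingerprint is written.
fingerprint = hashlib.sha1(der_cert).hexdigest()
```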
<br />
=== Copying message, connection closed ===<br />
<br />
ERROR: Copying message -2 [acc: email] connection closed<br />
Folder sent [acc: email]: ERROR: while syncing sent [account email] connection closed<br />
<br />
This can be caused by the same message being created both locally and on the server, which happens if your email provider automatically saves sent mail to the same folder as your local client does. If you encounter this, disable saving of sent messages in your local client.<br />
<br />
== See also ==<br />
<br />
* [http://lists.alioth.debian.org/mailman/listinfo/offlineimap-project Official OfflineIMAP mailing list]<br />
* [http://pbrisbin.com/posts/mutt_gmail_offlineimap/ Mutt + Gmail + Offlineimap] - An outline of brisbin's simple gmail/mutt setup using cron to keep offlineimap syncing.</div>Veoxhttps://wiki.archlinux.org/index.php?title=Talk:Systemd-boot&diff=494186Talk:Systemd-boot2017-10-27T15:54:10Z<p>Veox: /* Move section "Non-root drives are not decrypted by sd-lvm2 mkinitcpio hook" */ Or is this a bug?..</p>
<hr />
<div>== About reboot into firmware configuration interface ==<br />
If I remember correctly, isn't that one of the entries that are auto-generated by gummiboot, along with windows entries? [[User:Moviuro|Moviuro]] ([[User talk:Moviuro|talk]]) 12:41, 15 March 2015 (UTC)<br />
: AFAIK, there aren't any autogenerated default options in the Gummiboot boot manager to reboot into the firmware. I only have the autodetected {{ic|efiboomgr}} entry which appears when Windows is installed and the entries I defined manually. -- [[User:wget|wget]] ([[User talk:wget|talk]]) 13:08, 15 March 2015 (UTC)<br />
: I spent some time googling about this issue. Without success. So it would save time and pain to add some working example. It's a useful feature. --[[User:Cschlote|Cschlote]] ([[User talk:Cschlote|talk]]) 00:26, 16 March 2015 (UTC)<br />
:: http://wstaw.org/w/3gSV/ And I don't have a Reboot into device firmware entry, nor the EFI default loader [[User:Moviuro|Moviuro]] ([[User talk:Moviuro|talk]]) 22:35, 21 March 2015 (UTC)<br />
::: I already found several real machines, which will either show none of these entries, or just one of them. Only a VM shows all entries. --[[User:Cschlote|Cschlote]] ([[User talk:Cschlote|talk]]) 12:32, 23 March 2015 (UTC)<br />
:::: Well, this picture was taken on my Dell Latitude (E6430). [[User:Moviuro|Moviuro]] ([[User talk:Moviuro|talk]]) 15:28, 23 March 2015 (UTC)<br />
<br />
== $esp pseudo-var ==<br />
<br />
I'd suggest replacing {{ic|$esp}} that's prominent in the article with the standard {{ic|/boot}}, i.e. replace<br />
:{{ic|$esp}} is used to denote the mountpoint in this article. <br />
with<br />
:In this article, {{ic|/boot}} is used as the mountpoint.<br />
And replace {{ic|$esp}} instances accordingly. Changing the mountpoint is immediate to anyone who wants to do so, so another pseudo-var isn't required.<br />
<br />
In addition, it's confusing to those who wish to go with the recommended {{ic|/boot}} mountpoint. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 10:33, 5 January 2016 (UTC)<br />
<br />
:While that could make sense, we'd create an inconsistency with all the other boot loader articles, which do use $esp as well: [[GRUB]], [[Syslinux]], [[EFISTUB]] and [[rEFInd]]. Personally, I don't think that using a variable like that is creating confusion, especially after [https://wiki.archlinux.org/index.php?title=Beginners%27_guide&diff=414434&oldid=414433]. — [[User:Kynikos|Kynikos]] ([[User talk:Kynikos|talk]]) 07:12, 9 January 2016 (UTC)<br />
<br />
== Keys inside the boot menu, clarification ==<br />
<br />
once you use keys to change timeout (-,T,+,t), this setting is saved in a non-volatile EFI variable;<br />
in this way `loader.conf` setting is overridden; a) how to clear the non-volatile EFI variable? b) how to show values of non-volatile EFI variables? --[[User:NTia89|nTia89]] ([[User talk:NTia89|talk]]) 14:22, 27 May 2016 (UTC)<br />
<br />
:https://bbs.archlinux.org/viewtopic.php?pid=1733461#p1733461<br />
:a) maybe reflashing the bios ^^ or efivar -w (be careful, you can brick your motherboard) b) also with efivar --name var -p<br />
:{{unsigned|21:13, 31 August 2017|Fallback}}<br />
<br />
== The section about Windows 8+ overriding boot settings is incorrect ==<br />
<br />
I have a dual boot with Windows 10 and it does not override your boot settings or make Windows the default at each boot as explained in the wiki (it might during a major upgrade that works more or less as a new install, like Windows 8 -> Windows 10).<br />
<br />
The fact is that Windows normally manages the default boot EFI file $esp\boot\efi\bootx64.efi and keeps it identical to $esp/EFI/Microsoft/Boot/bootmgfw.efi. This file is updated often (I don't know if it is at every boot, but very often). The default installation of systemd-boot also puts a copy of itself at the default $esp\boot\efi\bootx64.efi, and this can cause a conflict because it will be overwritten by Windows. If we manage to correctly put a default boot entry in the firmware that is not $esp\boot\efi\bootx64.efi, there will be no problems. If we have a motherboard that can only boot $esp\boot\efi\bootx64.efi, then and only then are we in trouble, and the workaround described in the wiki can make sense. It would be safer not to touch $esp\boot\efi\bootx64.efi; I think Windows expects us not to touch this file.<br />
<br />
== splash ==<br />
<br />
add a section which talks about splash feature. --[[User:NTia89|nTia89]] ([[User talk:NTia89|talk]]) 09:40, 9 July 2016 (UTC)<br />
<br />
== Move section "Non-root drives are not decrypted by sd-lvm2 mkinitcpio hook" ==<br />
<br />
I've added it here after experiencing the issue when switching {{ic|mkinitcpio}} to use {{ic|systemd}} hooks instead of "regular" ones. I'm not sure how I got the idea that it's a necessary step when using {{ic|systemd-boot}}. It's not: the [[systemd-boot]] page mostly assumes "regular" {{ic|mkinitcpio}} hooks. This section is indeed misplaced.<br />
<br />
Perhaps it makes more sense to move to [[mkinitcpio]], and not [[LVM]]?.. -- [[User:Veox|Veox]] ([[User talk:Veox|talk]]) 15:35, 27 October 2017 (UTC)<br />
<br />
: Or perhaps some other page's sub-section on {{ic|mkinitcpio}}?.. -- [[User:Veox|Veox]] ([[User talk:Veox|talk]]) 15:42, 27 October 2017 (UTC)<br />
<br />
: To be honest, I'm not even sure if this isn't a bug with {{ic|mkinitcpio}}'s {{ic|sd-encrypt}} hook. Sorry for the noise. :( -- [[User:Veox|Veox]] ([[User talk:Veox|talk]]) 15:54, 27 October 2017 (UTC)</div>Veoxhttps://wiki.archlinux.org/index.php?title=Talk:Systemd-boot&diff=494183Talk:Systemd-boot2017-10-27T15:43:21Z<p>Veox: /* Move section "Non-root drives are not decrypted by sd-lvm2 mkinitcpio hook" */ Where to move?</p>
<hr />
<div>== About reboot into firmware configuration interface ==<br />
If I remember correctly, isn't that one of the entries that are auto-generated by gummiboot, along with windows entries? [[User:Moviuro|Moviuro]] ([[User talk:Moviuro|talk]]) 12:41, 15 March 2015 (UTC)<br />
: AFAIK, there aren't any autogenerated default options in the Gummiboot boot manager to reboot into the firmware. I only have the autodetected {{ic|efiboomgr}} entry which appears when Windows is installed and the entries I defined manually. -- [[User:wget|wget]] ([[User talk:wget|talk]]) 13:08, 15 March 2015 (UTC)<br />
: I spent some time googleing about this issue. Without success. So it would save time and pain to add some working example. It's a useful feature. --[[User:Cschlote|Cschlote]] ([[User talk:Cschlote|talk]]) 00:26, 16 March 2015 (UTC)<br />
:: http://wstaw.org/w/3gSV/ And I don't have a Reboot into device firmware entry, nor the EFI default loader [[User:Moviuro|Moviuro]] ([[User talk:Moviuro|talk]]) 22:35, 21 March 2015 (UTC)<br />
::: I already found several real machines, which will either show none of these entries, or just one of them. Only a VM shows all entries. --[[User:Cschlote|Cschlote]] ([[User talk:Cschlote|talk]]) 12:32, 23 March 2015 (UTC)<br />
:::: Well, this picture was taken on my Dell Latitude (E6430). [[User:Moviuro|Moviuro]] ([[User talk:Moviuro|talk]]) 15:28, 23 March 2015 (UTC)<br />
<br />
== $esp pseudo-var ==<br />
<br />
I'd suggest replacing {{ic|$esp}} that's prominent in the article with the standard {{ic|/boot}}, i.e. replace<br />
:{{ic|$esp}} is used to denote the mountpoint in this article. <br />
with<br />
:In this article, {{ic|/boot}} is used as the mountpoint.<br />
And replace {{ic|$esp}} instances accordingly. Changing the mountpoint is obvious to anyone who wants to do so, so another pseudo-var isn't required.<br />
<br />
In addition, it's confusing to those who wish to go with the recommended {{ic|/boot}} mountpoint. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 10:33, 5 January 2016 (UTC)<br />
<br />
:While that could make sense, we'd create an inconsistency with all the other boot loader articles, which do use $esp as well: [[GRUB]], [[Syslinux]], [[EFISTUB]] and [[rEFInd]]. Personally, I don't think that using a variable like that is creating confusion, especially after [https://wiki.archlinux.org/index.php?title=Beginners%27_guide&diff=414434&oldid=414433]. — [[User:Kynikos|Kynikos]] ([[User talk:Kynikos|talk]]) 07:12, 9 January 2016 (UTC)<br />
<br />
== Keys inside the boot menu, clarification ==<br />
<br />
Once you use keys to change the timeout (-, T, +, t), this setting is saved in a non-volatile EFI variable;<br />
in this way `loader.conf` setting is overridden; a) how to clear the non-volatile EFI variable? b) how to show values of non-volatile EFI variables? --[[User:NTia89|nTia89]] ([[User talk:NTia89|talk]]) 14:22, 27 May 2016 (UTC)<br />
<br />
:https://bbs.archlinux.org/viewtopic.php?pid=1733461#p1733461<br />
:a) maybe reflashing the BIOS ^^ or efivar -w (be careful, you can brick your motherboard) b) also with efivar --name var -p<br />
:{{unsigned|21:13, 31 August 2017|Fallback}}<br />
<br />
== The section about Windows 8+ overriding boot settings is incorrect ==<br />
<br />
I have a dual boot with Windows 10 and it does not override your boot settings or make Windows the default at each boot as explained in the wiki (it might during a major upgrade that works more or less like a new install, e.g. Windows 8 -> Windows 10).<br />
<br />
The fact is that Windows normally manages the default boot EFI file $esp\boot\efi\bootx64.efi and keeps it identical to $esp/EFI/Microsoft/Boot/bootmgfw.efi. This file is updated often (I do not know if it is at every boot, but very often). The default installation of systemd-boot also puts a copy of itself at the default $esp\boot\efi\bootx64.efi, and this can cause a conflict because it will be overwritten by Windows. If we manage to correctly set a default boot entry in the firmware that is not $esp\boot\efi\bootx64.efi, there will be no problems. Only if we have a motherboard that can only boot $esp\boot\efi\bootx64.efi are we in trouble, and then the workaround described in the wiki can make sense. It would be safer not to touch $esp\boot\efi\bootx64.efi; I think Windows expects us not to touch this file.<br />
<br />
== splash ==<br />
<br />
add a section which talks about splash feature. --[[User:NTia89|nTia89]] ([[User talk:NTia89|talk]]) 09:40, 9 July 2016 (UTC)<br />
<br />
== Move section "Non-root drives are not decrypted by sd-lvm2 mkinitcpio hook" ==<br />
<br />
I've added it here after experiencing the issue when switching {{ic|mkinitcpio}} to use {{ic|systemd}} hooks instead of "regular" ones. I'm not sure how I got the idea that it's a necessary step when using {{ic|systemd-boot}}. It's not: the [[systemd-boot]] page mostly assumes "regular" {{ic|mkinitcpio}} hooks. This section is indeed misplaced.<br />
<br />
Perhaps it makes more sense to move to [[mkinitcpio]], and not [[LVM]]?.. -- [[User:Veox|Veox]] ([[User talk:Veox|talk]]) 15:35, 27 October 2017 (UTC)<br />
<br />
: Or perhaps some other page's sub-section on {{ic|mkinitcpio}}?.. -- [[User:Veox|Veox]] ([[User talk:Veox|talk]]) 15:42, 27 October 2017 (UTC)</div>Veoxhttps://wiki.archlinux.org/index.php?title=Systemd-boot&diff=494182Systemd-boot2017-10-27T15:39:05Z<p>Veox: /* Non-root drives are not decrypted by sd-lvm2 mkinitcpio hook */ Update note (ping!).</p>
<hr />
<div>{{lowercase title}}<br />
[[Category:Boot loaders]]<br />
[[de:Gummiboot]]<br />
[[es:Systemd-boot]]<br />
[[ja:Systemd-boot]]<br />
[[ru:Systemd-boot]]<br />
[[zh-hans:Systemd-boot]]<br />
{{Related articles start}}<br />
{{Related|Arch boot process}}<br />
{{Related|Boot loaders}}<br />
{{Related|Secure Boot}}<br />
{{Related|Unified Extensible Firmware Interface}}<br />
{{Related articles end}}<br />
<br />
'''systemd-boot''', previously called '''gummiboot''', is a simple UEFI boot manager which executes configured EFI images. The default entry is selected by a configured pattern (glob) or an on-screen menu. It is included with {{pkg|systemd}}, which is installed on Arch systems by default.<br />
<br />
It is simple to configure, but it can only start EFI executables such as the Linux kernel [[EFISTUB]], the UEFI Shell, GRUB, or the Windows Boot Manager.<br />
<br />
== Installation ==<br />
<br />
=== EFI boot ===<br />
<br />
# Make sure you are booted in UEFI mode.<br />
# Verify [[Unified_Extensible_Firmware_Interface#Requirements_for_UEFI_variable_support|your EFI variables are accessible]].<br />
# Mount your [[EFI System Partition]] (ESP) properly. {{ic|''esp''}} is used to denote the mountpoint in this article. {{Note|''systemd-boot'' cannot load EFI binaries from other partitions. It is therefore recommended to mount your ESP to {{ic|/boot}}. In case you want to separate {{ic|/boot}} from the ESP see [[#Manually]] for more information.}}<br />
# If the ESP is '''not''' mounted at {{ic|/boot}}, then copy your kernel and initramfs onto that ESP. {{Note|For a way to automatically keep the kernel updated on the ESP, have a look at [[EFISTUB#Using systemd]] for some systemd units that can be adapted. If your EFI System Partition is using automount, you may need to add {{ic|vfat}} to a file in {{ic|/etc/modules-load.d/}} to ensure the currently running kernel has the {{ic|vfat}} module loaded at boot; otherwise, a kernel update could replace the module for the running version, making it impossible to mount {{ic|/boot/efi}} until reboot.}}<br />
# Type the following command to install ''systemd-boot'': {{bc|1=# bootctl --path=''esp'' install}} It will copy the ''systemd-boot'' binary to your EFI System Partition ({{ic|''esp''/EFI/systemd/systemd-bootx64.efi}} and {{ic|''esp''/EFI/Boot/BOOTX64.EFI}} – both of which are identical – on x86-64 systems) and add ''systemd-boot'' itself as the default EFI application (default boot entry) loaded by the EFI Boot Manager.<br />
# Finally you must [[#Configuration|configure]] the boot loader to function properly.<br />
<br />
=== BIOS boot ===<br />
<br />
{{Warning|This is not recommended.}}<br />
You can successfully install ''systemd-boot'' even if booted in BIOS mode. However, this process requires you to tell the firmware to launch ''systemd-boot'''s EFI file at boot, usually in one of two ways:<br />
<br />
* you have a working EFI Shell somewhere else.<br />
<br />
* your firmware interface provides a way of properly setting the EFI file that needs to be loaded at boot time.<br />
<br />
If either is possible, installation is easy: go into your EFI Shell or firmware configuration interface and change your machine's default EFI file to {{ic|''esp''/EFI/systemd/systemd-bootx64.efi}} (or {{ic|systemd-bootia32.efi}} if your system firmware is 32-bit).<br />
<br />
{{Note|The firmware interface of the Dell Latitude series provides everything you need to set up EFI boot, but the EFI Shell will not be able to write to the computer's ROM.}}<br />
<br />
=== Updating ===<br />
<br />
Unlike the previous separate ''gummiboot'' package, which updated automatically on a new package release with a {{ic|post_install}} script, updates of new ''systemd-boot'' versions must now be done manually by the user. However, the procedure can be automated using pacman hooks.<br />
<br />
==== Manually ====<br />
<br />
''systemd-boot'' ({{man|1|bootctl}}) assumes that your EFI System Partition is mounted on {{ic|/boot}}.<br />
<br />
# bootctl update<br />
<br />
If the ESP is not mounted on {{ic|/boot}}, its mountpoint can be specified with the {{ic|1=--path=}} option. For example:<br />
<br />
# bootctl --path=''esp'' update<br />
<br />
{{Note|This is also the command to use when migrating from ''gummiboot'', before removing that package. If that package has already been removed, however, run {{ic|1=bootctl --path=''esp'' install}}.}}<br />
<br />
==== Automatically ====<br />
<br />
The [[AUR]] package {{AUR|systemd-boot-pacman-hook}} provides a [[Pacman#Hooks|Pacman hook]] to automate the update process. [[Install|Installing]] the package will add a hook which will be executed every time the {{Pkg|systemd}} package is upgraded.<br />
<br />
Alternatively, place the following pacman hook in the {{ic|/etc/pacman.d/hooks/}} directory:<br />
<br />
{{hc|/etc/pacman.d/hooks/systemd-boot.hook|2=<br />
[Trigger]<br />
Type = Package<br />
Operation = Upgrade<br />
Target = systemd<br />
<br />
[Action]<br />
Description = Updating systemd-boot...<br />
When = PostTransaction<br />
Exec = /usr/bin/bootctl update<br />
}}<br />
<br />
== Configuration ==<br />
<br />
=== Basic configuration ===<br />
<br />
The basic configuration is stored in the {{ic|''esp''/loader/loader.conf}} file and is composed of three options:<br />
<br />
* {{ic|default}} – default entry to select (without the {{ic|.conf}} suffix); can be a wildcard like {{ic|arch-*}}.<br />
<br />
* {{ic|timeout}} – menu timeout in seconds. If this is not set, the menu will only be shown on key press during boot.<br />
<br />
* {{ic|editor}} – whether to enable the kernel parameters editor. {{ic|1}} (default) enables it, {{ic|0}} disables it; since a user can add {{ic|1=init=/bin/bash}} to bypass the root password and gain root access, it is strongly recommended to set this option to {{ic|0}}.<br />
<br />
Example:<br />
<br />
{{hc|''esp''/loader/loader.conf|<br />
default arch<br />
timeout 4<br />
editor 0<br />
}}<br />
<br />
{{Note|The first two options can be changed in the boot menu itself, and changes will be stored as EFI variables.}}<br />
<br />
{{Tip|A basic configuration file example is located at {{ic|/usr/share/systemd/bootctl/loader.conf}}.}}<br />
<br />
=== Adding boot entries ===<br />
<br />
{{Note|<br />
* ''systemd-boot'' will automatically check for "'''Windows Boot Manager'''" ({{ic|\EFI\Microsoft\Boot\Bootmgfw.efi}}), "'''EFI Shell'''" ({{ic|\shellx64.efi}}) and "'''EFI Default Loader'''" ({{ic|\EFI\Boot\bootx64.efi}}) at boot time, as well as specially prepared kernel files found in {{ic|\EFI\Linux}}. When detected, corresponding entries with titles {{ic|auto-windows}}, {{ic|auto-efi-shell}} and {{ic|auto-efi-default}}, respectively, will be automatically generated. These entries do not require manual loader configuration. However, it does not auto-detect other EFI applications (unlike [[rEFInd]]), so for booting the Linux kernel, manual configuration entries must be created.<br />
<br />
* If you dual-boot Windows, it is strongly recommended to disable its default [[Dual boot with Windows#Fast_Start-Up|Fast Start-Up]] option.<br />
* Remember to load the Intel [[microcode]] with {{ic|initrd}} if applicable.<br />
* You can find the {{ic|PARTUUID}} for your root partition with the command {{ic|1=blkid -s PARTUUID -o value /dev/sd''xY''}}, where {{ic|''x''}} is the device letter and {{ic|''Y''}} is the partition number. This is required only for your root partition, not {{ic|''esp''}}.}}<br />
<br />
''systemd-boot'' searches for boot menu items in {{ic|''esp''/loader/entries/*.conf}} – each file found must contain exactly one boot entry. The possible options are:<br />
<br />
* {{ic|title}} – operating system name. '''Required.'''<br />
<br />
* {{ic|version}} – kernel version, shown only when multiple entries with same title exist. Optional.<br />
<br />
* {{ic|machine-id}} – machine identifier from {{ic|/etc/machine-id}}, shown only when multiple entries with same title and version exist. Optional.<br />
<br />
* {{ic|efi}} – EFI program to start, relative to your ESP ({{ic|''esp''}}); e.g. {{ic|/vmlinuz-linux}}. Either this or {{ic|linux}} (see below) is '''required.'''<br />
<br />
* {{ic|options}} – command line options to pass to the EFI program or kernel boot parameters. Optional, but you will need at least {{ic|1=initrd=''efipath''}} and {{ic|1=root=''dev''}} if booting Linux.<br />
<br />
For Linux, you can specify {{ic|linux ''path-to-vmlinuz''}} and {{ic|initrd ''path-to-initramfs''}}; this will be automatically translated to {{ic|efi ''path''}} and {{ic|1=options initrd=''path''}} – this syntax is supported only for convenience and is functionally identical.<br />
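As an illustration of which fields are required, here is a small unofficial sanity-check sketch; {{ic|check_entry}} and the scratch file path are made up for the example, and ''systemd-boot'' itself ships no such tool:

```shell
#!/bin/sh
# Unofficial sketch: verify a loader entry has the required fields.
# An entry needs "title" plus either "efi" or "linux" (see the list above).
check_entry() {
    grep -q '^title[[:space:]]' "$1" || { echo "missing title"; return 1; }
    grep -Eq '^(efi|linux)[[:space:]]' "$1" || { echo "missing efi/linux"; return 1; }
    echo "ok"
}

# Example entry written to a scratch location (a real one lives in
# esp/loader/entries/).
cat > /tmp/arch.conf <<'EOF'
title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=PARTUUID=14420948-2cea-4de7-b042-40f67c618660 rw
EOF
check_entry /tmp/arch.conf    # prints: ok
```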
<br />
{{Style|There shouldn't be so many examples for specifying mount options or [[kernel parameters]].}}<br />
<br />
==== Standard root installations ====<br />
<br />
Here is an example entry for a root partition without LVM or LUKS:<br />
<br />
{{hc|''esp''/loader/entries/arch.conf|2=<br />
title Arch Linux<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options root=PARTUUID=14420948-2cea-4de7-b042-40f67c618660 rw<br />
}}<br />
<br />
Please note in the example above that {{ic|PARTUUID}}/{{ic|PARTLABEL}} identifies a GPT partition, and differs from {{ic|UUID}}/{{ic|LABEL}}, which identifies a filesystem. Using {{ic|PARTUUID}}/{{ic|PARTLABEL}} is advantageous because it is invariant (i.e. unchanging) even if you reformat the partition with another filesystem, or if the {{ic|/dev/sd*}} mapping changes for some reason. It is also useful if you do not have a filesystem on the partition (or use LUKS, which does not support {{ic|LABEL}}s).<br />
<br />
{{Tip|An example entry file is located at {{ic|/usr/share/systemd/bootctl}}.}}<br />
<br />
==== LVM root installations ====<br />
<br />
{{Warning|''systemd-boot'' cannot be used without a separate {{ic|/boot}} filesystem outside of LVM.}}<br />
<br />
Here is an example for a root partition using [[LVM|Logical Volume Management]]:<br />
<br />
{{hc|''esp''/loader/entries/arch-lvm.conf|2=<br />
title Arch Linux (LVM)<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options root=/dev/mapper/<VolumeGroup-LogicalVolume> rw<br />
}}<br />
<br />
Replace {{ic|<VolumeGroup-LogicalVolume>}} with the actual VG and LV names (e.g. {{ic|1=root=/dev/mapper/volgroup00-lvolroot}}). Alternatively, it is also possible to use a UUID instead:<br />
....<br />
options root=UUID=<UUID identifier> rw<br />
<br />
Note that {{ic|1=root='''UUID'''=}} is used instead of {{ic|1=root='''PARTUUID'''=}}, which is used for root partitions without LVM or LUKS.<br />
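When writing the {{ic|/dev/mapper}} path by hand, note that device-mapper joins the VG and LV names with a single {{ic|-}} and doubles any hyphen that is part of a name itself (e.g. VG {{ic|my-vg}} with LV {{ic|root}} becomes {{ic|/dev/mapper/my--vg-root}}). A sketch of the rule, with {{ic|mapper_path}} being a made-up helper:

```shell
#!/bin/sh
# Sketch: derive the /dev/mapper path from VG and LV names. Device-mapper
# joins them with "-" and doubles any hyphen inside a name itself.
mapper_path() {
    vg=$(printf %s "$1" | sed 's/-/--/g')
    lv=$(printf %s "$2" | sed 's/-/--/g')
    printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

mapper_path volgroup00 lvolroot   # /dev/mapper/volgroup00-lvolroot
mapper_path my-vg root            # /dev/mapper/my--vg-root
```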
<br />
==== Encrypted root installations ====<br />
<br />
Here is an example configuration file for an encrypted root partition ([[Dm-crypt|DM-Crypt / LUKS]]) using the {{ic|encrypt}} [[mkinitcpio]] hook:<br />
<br />
{{hc|''esp''/loader/entries/arch-encrypted.conf|2=<br />
title Arch Linux Encrypted<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options cryptdevice=UUID=<UUID>:<mapped-name> root=/dev/mapper/<mapped-name> quiet rw<br />
}}<br />
<br />
This example uses {{ic|UUID}}; {{ic|PARTUUID}} can replace it, if so desired. You may also replace the {{ic|/dev/mapper}} path in {{ic|1=root=}} with the UUID of the unlocked filesystem. {{ic|mapped-name}} is whatever you want the unlocked device to be called. See [[Dm-crypt/System configuration#Boot loader]].<br />
<br />
If you are using LVM, your cryptdevice line will look like this:<br />
<br />
{{hc|''esp''/loader/entries/arch-encrypted-lvm.conf|2=<br />
title Arch Linux Encrypted LVM<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options cryptdevice=UUID=<UUID>:MyVolGroup root=/dev/mapper/MyVolGroup-MyVolRoot quiet rw<br />
}}<br />
<br />
You can also add other EFI programs such as {{ic|\EFI\arch\grub.efi}}.<br />
<br />
==== btrfs subvolume root installations ====<br />
<br />
If booting a [[btrfs]] subvolume as root, amend the {{ic|options}} line with {{ic|rootflags<nowiki>=</nowiki>subvol<nowiki>=</nowiki><root subvolume>}}. In the example below, root has been mounted as a btrfs subvolume called 'ROOT' (e.g. {{ic|mount -o subvol<nowiki>=</nowiki>ROOT /dev/sdxY /mnt}}):<br />
<br />
{{hc|''esp''/loader/entries/arch-btrfs-subvol.conf|2=<br />
title Arch Linux<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options root=PARTUUID=14420948-2cea-4de7-b042-40f67c618660 rw rootflags<nowiki>=</nowiki>subvol<nowiki>=</nowiki>ROOT<br />
}}<br />
<br />
Failure to do so will result in the following error message: {{ic|ERROR: Root device mounted successfully, but /sbin/init does not exist.}}<br />
<br />
==== ZFS root installations ====<br />
<br />
When booting from a [[ZFS]] dataset, add {{ic|zfs<nowiki>=</nowiki><root dataset>}} to the {{ic|options}} line. Here the root dataset has been set to 'zroot/ROOT/default':<br />
<br />
{{hc|''esp''/loader/entries/arch-zfs.conf|2=<br />
title Arch Linux ZFS<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options zfs=zroot/ROOT/default rw<br />
}}<br />
<br />
When booting off a ZFS dataset, ensure that it has had the {{ic|bootfs}} property set with {{ic|zpool set bootfs<nowiki>=</nowiki><root dataset> <zpool>}}.<br />
<br />
==== EFI Shells or other EFI apps ====<br />
<br />
In case you have installed EFI shells or other EFI applications on the ESP, you can use the following snippets:<br />
<br />
{{hc|''esp''/loader/entries/uefi-shell-v1-x86_64.conf|2=<br />
title UEFI Shell x86_64 v1<br />
efi /EFI/shellx64_v1.efi<br />
}}<br />
<br />
{{hc|''esp''/loader/entries/uefi-shell-v2-x86_64.conf|2=<br />
title UEFI Shell x86_64 v2<br />
efi /EFI/shellx64_v2.efi<br />
}}<br />
<br />
{{Expansion|Add example on how to boot into EFI firmware setup.}}<br />
<br />
=== Preparing kernels for EFI\Linux ===<br />
<br />
{{Style|Does not belong here, not specific to systemd-boot.}}<br />
<br />
The {{ic|EFI\Linux}} directory on the ESP is searched for specially prepared kernel files, which bundle the kernel, the initrd, the kernel command line and {{ic|/etc/os-release}} into one file. Such a file can easily be signed for [[Secure Boot]].<br />
<br />
Create the bundle file like this:<br />
<br />
{{hc|Kernel packaging command:|<nowiki>objcopy \<br />
--add-section .osrel="/usr/lib/os-release" --change-section-vma .osrel=0x20000 \<br />
--add-section .cmdline="kernel command line" --change-section-vma .cmdline=0x30000 \<br />
--add-section .linux="vmlinuz-file" --change-section-vma .linux=0x40000 \<br />
--add-section .initrd="initrd-file" --change-section-vma .initrd=0x3000000 \<br />
"/usr/lib/systemd/boot/efi/linuxx64.efi.stub" "linux.efi"</nowiki>}}<br />
<br />
Optionally, sign ''linux.efi'' now (e.g. using ''sbsigntools'' from the [[AUR]]).<br />
<br />
Finally, copy ''linux.efi'' into {{ic|''esp''\EFI\Linux}}.<br />
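The ''objcopy'' recipe above is easy to mistype, so a dry-run sketch that only prints the assembled command for review can help; the {{ic|/etc/kernel/cmdline}} path and the output location here are assumptions for the example, not requirements:

```shell
#!/bin/sh
# Sketch (dry run): assemble the objcopy invocation from the recipe above
# and print it for review instead of executing it. Input paths are typical
# Arch defaults and may differ on your system.
cmdline_file="/etc/kernel/cmdline"   # plain-text file holding the kernel command line
kernel="/boot/vmlinuz-linux"
initrd="/boot/initramfs-linux.img"
stub="/usr/lib/systemd/boot/efi/linuxx64.efi.stub"
out="/boot/EFI/Linux/linux.efi"

cmd="objcopy \
--add-section .osrel=/usr/lib/os-release --change-section-vma .osrel=0x20000 \
--add-section .cmdline=$cmdline_file --change-section-vma .cmdline=0x30000 \
--add-section .linux=$kernel --change-section-vma .linux=0x40000 \
--add-section .initrd=$initrd --change-section-vma .initrd=0x3000000 \
$stub $out"

printf '%s\n' "$cmd"    # review the paths, then run the command as root
```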
<br />
=== Support hibernation ===<br />
<br />
See [[Suspend and hibernate]].<br />
<br />
=== Kernel parameters editor with password protection ===<br />
<br />
Alternatively, you can install {{AUR|systemd-boot-password}}, which supports the {{ic|password}} basic configuration option. Use {{ic|sbpctl generate}} to generate a value for this option.<br />
<br />
Install ''systemd-boot-password'' with the following command:<br />
<br />
{{bc|1=# sbpctl install ''esp''}}<br />
<br />
With the editor enabled, you will be prompted for your password before you can edit kernel parameters.<br />
<br />
== Keys inside the boot menu ==<br />
<br />
The following keys are used inside the menu:<br />
* {{ic|Up/Down}} - select entry<br />
* {{ic|Enter}} - boot the selected entry<br />
* {{ic|d}} - select the default entry to boot (stored in a non-volatile EFI variable)<br />
* {{ic|-/T}} - decrease the timeout (stored in a non-volatile EFI variable)<br />
* {{ic|+/t}} - increase the timeout (stored in a non-volatile EFI variable)<br />
* {{ic|e}} - edit the kernel command line. It has no effect if the {{ic|editor}} config option is set to {{ic|0}}.<br />
* {{ic|v}} - show the ''systemd-boot'' and UEFI version<br />
* {{ic|Q}} - quit<br />
* {{ic|P}} - print the current configuration<br />
* {{ic|h/?}} - help<br />
<br />
These hotkeys will, when pressed inside the menu or during bootup, directly boot<br />
a specific entry:<br />
<br />
* {{ic|l}} - Linux<br />
* {{ic|w}} - Windows<br />
* {{ic|a}} - OS X<br />
* {{ic|s}} - EFI Shell<br />
* {{ic|1-9}} - number of entry<br />
<br />
== Troubleshooting ==<br />
<br />
=== Manual entry using efibootmgr ===<br />
<br />
If the {{ic|bootctl install}} command failed, you can create an EFI boot entry manually using {{Pkg|efibootmgr}}:<br />
<br />
# efibootmgr -c -d /dev/sdX -p Y -l /EFI/systemd/systemd-bootx64.efi -L "Linux Boot Manager"<br />
<br />
where {{ic|/dev/sdXY}} is the [[EFI System Partition]].<br />
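The {{ic|-d}}/{{ic|-p}} arguments can be derived from the partition's device node; a sketch, where {{ic|/dev/sda1}} is a placeholder and NVMe-style names such as {{ic|/dev/nvme0n1p1}} would need the trailing {{ic|p}} handled separately:

```shell
#!/bin/sh
# Sketch: split a /dev/sdXY-style partition node into the disk (-d) and
# partition number (-p) that efibootmgr expects.
esp_dev="/dev/sda1"         # placeholder; find yours with: findmnt -no SOURCE /boot
disk="${esp_dev%%[0-9]*}"   # strip the trailing partition number -> /dev/sda
part="${esp_dev##*[!0-9]}"  # keep only the trailing digits       -> 1
printf 'efibootmgr -c -d %s -p %s -l /EFI/systemd/systemd-bootx64.efi -L "Linux Boot Manager"\n' \
    "$disk" "$part"
```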
<br />
=== Menu does not appear after Windows upgrade ===<br />
<br />
See [[UEFI#Windows changes boot order]].<br />
<br />
=== Non-root drives are not decrypted by sd-lvm2 mkinitcpio hook ===<br />
<br />
{{Move|mkinitcpio|More relevant to ''mkinitcpio'' than ''systemd-boot''.|Talk:Systemd-boot#Move_section_.22Non-root_drives_are_not_decrypted_by_sd-lvm2_mkinitcpio_hook.22}}<br />
<br />
Even if the drives are listed in {{ic|/etc/crypttab}}, {{ic|systemd-cryptsetup-generator}} may choose to skip them. From {{man|8|systemd-cryptsetup-generator}}:<br />
<br />
If /etc/crypttab exists, only those UUIDs specified on the kernel<br />
command line will be activated in the initrd or the real root.<br />
<br />
This may make the boot process fail due to LVM's inability to locate its physical volumes, which manifests as timeout errors:<br />
<br />
Timed out waiting for device dev-mapper-VolGroup00\x2dLVNAME.device.<br />
<br />
(This is not unexpected: after all, {{ic|systemd-cryptsetup-generator}} never decrypted the device which stores {{ic|VolGroup00/LVNAME}}.)<br />
<br />
To work around this, make sure that in kernel boot parameters, {{ic|rd.luks.uuid}} is used to specify the LUKS device containing the root volume, and not just {{ic|luks.uuid}}.<br />
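The escaped name in that error message follows systemd's device unit naming, normally produced by {{ic|systemd-escape --path}}; as a rough sketch of the mangling:

```shell
#!/bin/sh
# Sketch: reproduce the .device unit name systemd derives from a mapper
# path: drop the leading "/", escape "-" within components as \x2d, then
# turn the "/" separators into "-". (systemd-escape does this properly.)
path="/dev/mapper/VolGroup00-LVNAME"
unit="$(printf %s "${path#/}" | sed -e 's/-/\\x2d/g' -e 's,/,-,g').device"
printf '%s\n' "$unit"   # dev-mapper-VolGroup00\x2dLVNAME.device
```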
<br />
== See also ==<br />
<br />
* http://www.freedesktop.org/wiki/Software/systemd/systemd-boot/</div>Veoxhttps://wiki.archlinux.org/index.php?title=Talk:Systemd-boot&diff=494181Talk:Systemd-boot2017-10-27T15:35:37Z<p>Veox: /* Move section "Non-root drives are not decrypted by sd-lvm2 mkinitcpio hook" */ new section</p>
<hr />
<div>== About reboot into firmware configuration interface ==<br />
If I remember correctly, isn't that one of the entries that are auto-generated by gummiboot, along with windows entries? [[User:Moviuro|Moviuro]] ([[User talk:Moviuro|talk]]) 12:41, 15 March 2015 (UTC)<br />
: AFAIK, there aren't any autogenerated default options in the Gummiboot boot manager to reboot into the firmware. I only have the autodetected {{ic|efiboomgr}} entry which appears when Windows is installed and the entries I defined manually. -- [[User:wget|wget]] ([[User talk:wget|talk]]) 13:08, 15 March 2015 (UTC)<br />
: I spent some time googleing about this issue. Without success. So it would save time and pain to add some working example. It's a useful feature. --[[User:Cschlote|Cschlote]] ([[User talk:Cschlote|talk]]) 00:26, 16 March 2015 (UTC)<br />
:: http://wstaw.org/w/3gSV/ And I don't have a Reboot into device firmware entry, nor the EFI default loader [[User:Moviuro|Moviuro]] ([[User talk:Moviuro|talk]]) 22:35, 21 March 2015 (UTC)<br />
::: I already found several real machines, which will either show none of these entries, or just one of them. Only a VM shows all entries. --[[User:Cschlote|Cschlote]] ([[User talk:Cschlote|talk]]) 12:32, 23 March 2015 (UTC)<br />
:::: Well, this picture was taken on my Dell Latitude (E6430). [[User:Moviuro|Moviuro]] ([[User talk:Moviuro|talk]]) 15:28, 23 March 2015 (UTC)<br />
<br />
== $esp pseudo-var ==<br />
<br />
I'd suggest replacing {{ic|$esp}} that's prominent in the article with the standard {{ic|/boot}}, i.e. replace<br />
:{{ic|$esp}} is used to denote the mountpoint in this article. <br />
with<br />
:In this article, {{ic|/boot}} is used as the mountpoint.<br />
And replace {{ic|$esp}} instances accordingly. Changing the mountpoint is immediate to anyone who wants to do so, so another pseudo-var isn't required.<br />
<br />
In addition, it's confusing to those who wish to go with the recommended {{ic|/boot}} mountpoint. -- [[User:Alad|Alad]] ([[User talk:Alad|talk]]) 10:33, 5 January 2016 (UTC)<br />
<br />
:While that could make sense, we'd create an inconsistency with all the other boot loader articles, which do use $esp as well: [[GRUB]], [[Syslinux]], [[EFISTUB]] and [[rEFInd]]. Personally, I don't think that using a variable like that is creating confusion, especially after [https://wiki.archlinux.org/index.php?title=Beginners%27_guide&diff=414434&oldid=414433]. — [[User:Kynikos|Kynikos]] ([[User talk:Kynikos|talk]]) 07:12, 9 January 2016 (UTC)<br />
<br />
== Keys inside the boot menu, clarification ==<br />
<br />
once you use keys to change timeout (-,T,+,t), this setting is saved in a non-volatile EFI variable;<br />
in this way `loader.conf` setting is overridden; a) how to clear the non-volatile EFI variable? b) how to show values of non-volatile EFI variables? --[[User:NTia89|nTia89]] ([[User talk:NTia89|talk]]) 14:22, 27 May 2016 (UTC)<br />
<br />
:https://bbs.archlinux.org/viewtopic.php?pid=1733461#p1733461<br />
:a) maybe reflashing the bios ^^ or efivar -w (be carefully you can brick your motherboard) b) also with efivar --name var -p<br />
:{{unsigned|21:13, 31 August 2017|Fallback}}<br />
<br />
== The section about Windows 8+ overriding boot settings is incorrect ==<br />
<br />
I have a dual boot with Windows 10 and it does not override your boot settings or make Windows the default at each boot as explained in the wiki (it might during a major upgrade that work more or less as a new install, like Windows 8 -> Windows 10).<br />
<br />
The fact is that, Windows normally manage the default boot efi file $esp\boot\efi\bootx64.efi and keep it identical to $esp/EFI/Microsoft/Boot/bootmgfw.efi . This file is often updated (I don't know if it is at every boot, but very often). The default installation of systemd-boot put also a copy of itself at the default $esp\boot\efi\bootx64.efi and this can gives a conflict because it will be overwritten by Windows. If we manage to correctly put a default boot entry in the firmware that is not $esp\boot\efi\bootx64.efi, there will be no problems. If we have a motherboard that can only boot $esp\boot\efi\bootx64.efi then and only then we are in trouble and the work around described in the wiki can make sense. It would be safer not touching $esp\boot\efi\bootx64.efi, I think Windows expect we do not touch this file.<br />
<br />
== splash ==<br />
<br />
add a section which talks about splash feature. --[[User:NTia89|nTia89]] ([[User talk:NTia89|talk]]) 09:40, 9 July 2016 (UTC)<br />
<br />
== Move section "Non-root drives are not decrypted by sd-lvm2 mkinitcpio hook" ==<br />
<br />
I've added it here after experiencing the issue when switching {{ic|mkinitcpio}} to use {{ic|systemd}} hooks instead of "regular" ones. I'm not sure how I got the idea that it's a necessary step when using {{ic|systemd-boot}}. It's not: the [[systemd-boot]] page mostly assumes "regular" {{ic|mkinitcpio}} hooks. This section is indeed misplaced.<br />
<br />
Perhaps it makes more sense to move to [[mkinitcpio]], and not [[LVM]]?.. -- [[User:Veox|Veox]] ([[User talk:Veox|talk]]) 15:35, 27 October 2017 (UTC)</div>Veoxhttps://wiki.archlinux.org/index.php?title=Systemd-boot&diff=494180Systemd-boot2017-10-27T15:15:54Z<p>Veox: /* Non-root drives are not decrypted by sd-lvm2 mkinitcpio hook */ Add link to systemd-cryptsetup-generator manpage; plus small text edit.</p>
<hr />
<div>{{lowercase title}}<br />
[[Category:Boot loaders]]<br />
[[de:Gummiboot]]<br />
[[es:Systemd-boot]]<br />
[[ja:Systemd-boot]]<br />
[[ru:Systemd-boot]]<br />
[[zh-hans:Systemd-boot]]<br />
{{Related articles start}}<br />
{{Related|Arch boot process}}<br />
{{Related|Boot loaders}}<br />
{{Related|Secure Boot}}<br />
{{Related|Unified Extensible Firmware Interface}}<br />
{{Related articles end}}<br />
<br />
'''systemd-boot''', previously called '''gummiboot''', is a simple UEFI boot manager which executes configured EFI images. The default entry is selected by a configured pattern (glob) or an on-screen menu. It is included with {{pkg|systemd}}, which is installed on Arch system by default.<br />
<br />
It is simple to configure but it can only start EFI executables such as the Linux kernel [[EFISTUB]], UEFI Shell, GRUB, the Windows Boot Manager.<br />
<br />
== Installation ==<br />
<br />
=== EFI boot ===<br />
<br />
# Make sure you are booted in UEFI mode.<br />
# Verify [[Unified_Extensible_Firmware_Interface#Requirements_for_UEFI_variable_support|your EFI variables are accessible]].<br />
# Mount your [[EFI System Partition]] (ESP) properly. {{ic|''esp''}} is used to denote the mountpoint in this article. {{Note|''systemd-boot'' cannot load EFI binaries from other partitions. It is therefore recommended to mount your ESP to {{ic|/boot}}. In case you want to separate {{ic|/boot}} from the ESP see [[#Manually]] for more information.}}<br />
# If the ESP is '''not''' mounted at {{ic|/boot}}, then copy your kernel and initramfs onto that ESP. {{Note|For a way to automatically keep the kernel updated on the ESP, have a look at [[EFISTUB#Using systemd]] for some systemd units that can be adapted. If your EFI System Partition is using automount, you may need to add {{ic|vfat}} to a file in {{ic|/etc/modules-load.d/}} to ensure the current running kernel has the {{ic|vfat}} module loaded at boot, before any kernel update happens that could replace the module for the currently running version making the mounting of {{ic|/boot/efi}} impossible until reboot.}}<br />
# Type the following command to install ''systemd-boot'': {{bc|1=# bootctl --path=''esp'' install}} It will copy the ''systemd-boot'' binary to your EFI System Partition ({{ic|''esp''/EFI/systemd/systemd-bootx64.efi}} and {{ic|''esp''/EFI/Boot/BOOTX64.EFI}} – both of which are identical – on x86-64 systems) and add ''systemd-boot'' itself as the default EFI application (default boot entry) loaded by the EFI Boot Manager.<br />
# Finally you must [[#Configuration|configure]] the boot loader to function properly.<br />
<br />
=== BIOS boot ===<br />
<br />
{{Warning|This is not recommended.}}<br />
You can successfully install ''systemd-boot'' if booted with in BIOS mode. However, this process requires you to tell firmware to launch ''systemd-boot'''s EFI file at boot, usually via two ways:<br />
<br />
* you have a working EFI Shell somewhere else.<br />
<br />
* your firmware interface provides a way of properly setting the EFI file that needs to be loaded at boot time.<br />
<br />
If you can do it, the installation is easier: go into your EFI Shell or your firmware configuration interface and change your machine's default EFI file to {{ic|''esp''/EFI/systemd/systemd-bootx64.efi}} ( or {{ic|systemd-bootia32.efi}} depending if your system firmware is 32 bit).<br />
<br />
{{Note|the firmware interface of Dell Latitude series provides everything you need to setup EFI boot but the EFI Shell won't be able to write to the computer's ROM.}}<br />
<br />
=== Updating ===<br />
<br />
Unlike the previous separate ''gummiboot'' package, which updated automatically on a new package release with a {{ic|post_install}} script, new ''systemd-boot'' versions must now be installed manually by the user. However, the procedure can be automated using pacman hooks.<br />
<br />
==== Manually ====<br />
<br />
''systemd-boot'' ({{man|1|bootctl}}) assumes that your EFI System Partition is mounted on {{ic|/boot}}.<br />
<br />
# bootctl update<br />
<br />
If the ESP is not mounted on {{ic|/boot}}, specify it with the {{ic|1=--path=}} option. For example:<br />
<br />
# bootctl --path=''esp'' update<br />
<br />
{{Note|This is also the command to use when migrating from ''gummiboot'', before removing that package. If that package has already been removed, however, run {{ic|1=bootctl --path=''esp'' install}}.}}<br />
<br />
==== Automatically ====<br />
<br />
The [[AUR]] package {{AUR|systemd-boot-pacman-hook}} provides a [[Pacman#Hooks|Pacman hook]] to automate the update process. [[Install|Installing]] the package will add a hook which will be executed every time the {{Pkg|systemd}} package is upgraded.<br />
<br />
Alternatively, place the following pacman hook in the {{ic|/etc/pacman.d/hooks/}} directory:<br />
<br />
{{hc|/etc/pacman.d/hooks/systemd-boot.hook|2=<br />
[Trigger]<br />
Type = Package<br />
Operation = Upgrade<br />
Target = systemd<br />
<br />
[Action]<br />
Description = Updating systemd-boot...<br />
When = PostTransaction<br />
Exec = /usr/bin/bootctl update<br />
}}<br />
<br />
== Configuration ==<br />
<br />
=== Basic configuration ===<br />
<br />
The basic configuration is stored in the {{ic|''esp''/loader/loader.conf}} file and consists of three options:<br />
<br />
* {{ic|default}} – default entry to select (without the {{ic|.conf}} suffix); can be a wildcard like {{ic|arch-*}}.<br />
<br />
* {{ic|timeout}} – menu timeout in seconds. If this is not set, the menu will only be shown on key press during boot.<br />
<br />
* {{ic|editor}} – whether to enable the kernel parameters editor or not. {{ic|1}} (default) is enabled, {{ic|0}} is disabled; since the user can add {{ic|1=init=/bin/bash}} to bypass root password and gain root access, it is strongly recommended to set this option to {{ic|0}}.<br />
<br />
Example:<br />
<br />
{{hc|''esp''/loader/loader.conf|<br />
default arch<br />
timeout 4<br />
editor 0<br />
}}<br />
<br />
{{Note|The first 2 options can be changed in the boot menu itself and changes will be stored as EFI variables.}}<br />
<br />
{{Tip|A basic configuration file example is located at {{ic|/usr/share/systemd/bootctl/loader.conf}}.}}<br />
<br />
=== Adding boot entries ===<br />
<br />
{{Note|<br />
* ''bootctl'' will automatically check for "'''Windows Boot Manager'''" ({{ic|\EFI\Microsoft\Boot\Bootmgfw.efi}}), "'''EFI Shell'''" ({{ic|\shellx64.efi}}) and "'''EFI Default Loader'''" ({{ic|\EFI\Boot\bootx64.efi}}) at boot time, as well as specially prepared kernel files found in {{ic|\EFI\Linux}}. When detected, corresponding entries with titles {{ic|auto-windows}}, {{ic|auto-efi-shell}} and {{ic|auto-efi-default}}, respectively, will be automatically generated. These entries do not require manual loader configuration. However, it does not auto-detect other EFI applications (unlike [[rEFInd]]), so for booting the Linux kernel, manual configuration entries must be created.<br />
<br />
* If you dual-boot Windows, it is strongly recommended to disable its default [[Dual boot with Windows#Fast_Start-Up|Fast Start-Up]] option.<br />
* Remember to load the Intel [[microcode]] with {{ic|initrd}} if applicable.<br />
* You can find the {{ic|PARTUUID}} for your root partition with the command {{ic|1=blkid -s PARTUUID -o value /dev/sd''xY''}}, where {{ic|''x''}} is the device letter and {{ic|''Y''}} is the partition number. This is required only for your root partition, not {{ic|''esp''}}.}}<br />
<br />
''bootctl'' searches for boot menu items in {{ic|''esp''/loader/entries/*.conf}} – each file found must contain exactly one boot entry. The possible options are:<br />
<br />
* {{ic|title}} – operating system name. '''Required.'''<br />
<br />
* {{ic|version}} – kernel version, shown only when multiple entries with same title exist. Optional.<br />
<br />
* {{ic|machine-id}} – machine identifier from {{ic|/etc/machine-id}}, shown only when multiple entries with same title and version exist. Optional.<br />
<br />
* {{ic|efi}} – EFI program to start, relative to your ESP ({{ic|''esp''}}); e.g. {{ic|/vmlinuz-linux}}. Either this or {{ic|linux}} (see below) is '''required.'''<br />
<br />
* {{ic|options}} – command line options to pass to the EFI program or kernel boot parameters. Optional, but you will need at least {{ic|1=initrd=''efipath''}} and {{ic|1=root=''dev''}} if booting Linux.<br />
<br />
For Linux, you can specify {{ic|linux ''path-to-vmlinuz''}} and {{ic|initrd ''path-to-initramfs''}}; this will be automatically translated to {{ic|efi ''path''}} and {{ic|1=options initrd=''path''}} – this syntax is only supported for convenience and has no differences in function.<br />
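To illustrate this translation (a sketch of the equivalence only, not of how ''systemd-boot'' implements it internally):<br />

```shell
#!/bin/sh
# Illustrative only: show how a 'linux'/'initrd' entry maps to the
# generic 'efi'/'options' form. systemd-boot performs this translation
# itself; this sketch merely prints the equivalent lines.
translate_entry() {
    kernel="$1" initrd="$2" extra="$3"
    printf 'efi %s\n' "$kernel"
    printf 'options initrd=%s %s\n' "$initrd" "$extra"
}

translate_entry /vmlinuz-linux /initramfs-linux.img \
    'root=PARTUUID=14420948-2cea-4de7-b042-40f67c618660 rw'
# prints:
#   efi /vmlinuz-linux
#   options initrd=/initramfs-linux.img root=PARTUUID=14420948-2cea-4de7-b042-40f67c618660 rw
```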
<br />
{{Style|There shouldn't be so many examples for specifying mount options or [[kernel parameters]].}}<br />
<br />
==== Standard root installations ====<br />
<br />
Here is an example entry for a root partition without LVM or LUKS:<br />
<br />
{{hc|''esp''/loader/entries/arch.conf|2=<br />
title Arch Linux<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options root=PARTUUID=14420948-2cea-4de7-b042-40f67c618660 rw<br />
}}<br />
<br />
Please note in the example above that {{ic|PARTUUID}}/{{ic|PARTLABEL}} identifies a GPT partition, and differs from {{ic|UUID}}/{{ic|LABEL}}, which identifies a filesystem. Using {{ic|PARTUUID}}/{{ic|PARTLABEL}} is advantageous because it is invariant (i.e. unchanging) even if you reformat the partition with another filesystem, or if the {{ic|/dev/sd*}} mapping changes for some reason. It is also useful if you do not have a filesystem on the partition (or use LUKS, which does not support {{ic|LABEL}}s).<br />
<br />
{{Tip|An example entry file is located at {{ic|/usr/share/systemd/bootctl}}.}}<br />
<br />
==== LVM root installations ====<br />
<br />
{{Warning|''systemd-boot'' cannot be used without a separate {{ic|/boot}} filesystem outside of LVM.}}<br />
<br />
Here is an example for a root partition using [[LVM|Logical Volume Management]]:<br />
<br />
{{hc|''esp''/loader/entries/arch-lvm.conf|2=<br />
title Arch Linux (LVM)<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options root=/dev/mapper/<VolumeGroup-LogicalVolume> rw<br />
}}<br />
<br />
Replace {{ic|<VolumeGroup-LogicalVolume>}} with the actual VG and LV names (e.g. {{ic|1=root=/dev/mapper/volgroup00-lvolroot}}). Alternatively, it is also possible to use a UUID instead:<br />
....<br />
options root=UUID=<UUID identifier> rw<br />
<br />
Note that {{ic|1=root='''UUID'''=}} is used here instead of the {{ic|1=root='''PARTUUID'''=}} used for root partitions without LVM or LUKS.<br />
<br />
==== Encrypted Root Installations ====<br />
<br />
Here is an example configuration file for an encrypted root partition ([[Dm-crypt|DM-Crypt / LUKS]]) using the {{ic|encrypt}} [[mkinitcpio]] hook:<br />
<br />
{{hc|''esp''/loader/entries/arch-encrypted.conf|2=<br />
title Arch Linux Encrypted<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options cryptdevice=UUID=<UUID>:<mapped-name> root=/dev/mapper/<mapped-name> quiet rw<br />
}}<br />
<br />
A {{ic|UUID}} is used in this example; a {{ic|PARTUUID}} can be used instead, if desired. You may also replace the {{ic|/dev/mapper}} path with the UUID of the filesystem inside the mapped device. {{ic|mapped-name}} is an arbitrary name for the unlocked device. See [[Dm-crypt/System configuration#Boot loader]].<br />
<br />
If you are using LVM, your cryptdevice line will look like this:<br />
<br />
{{hc|''esp''/loader/entries/arch-encrypted-lvm.conf|2=<br />
title Arch Linux Encrypted LVM<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options cryptdevice=UUID=<UUID>:MyVolGroup root=/dev/mapper/MyVolGroup-MyVolRoot quiet rw<br />
}}<br />
<br />
You can also add other EFI programs such as {{ic|\EFI\arch\grub.efi}}.<br />
<br />
==== btrfs subvolume root installations ====<br />
<br />
If booting a [[btrfs]] subvolume as root, amend the {{ic|options}} line with {{ic|rootflags<nowiki>=</nowiki>subvol<nowiki>=</nowiki><root subvolume>}}. In the example below, root has been mounted as a btrfs subvolume called 'ROOT' (e.g. {{ic|mount -o subvol<nowiki>=</nowiki>ROOT /dev/sdxY /mnt}}):<br />
<br />
{{hc|''esp''/loader/entries/arch-btrfs-subvol.conf|2=<br />
title Arch Linux<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options root=PARTUUID=14420948-2cea-4de7-b042-40f67c618660 rw rootflags<nowiki>=</nowiki>subvol<nowiki>=</nowiki>ROOT<br />
}}<br />
<br />
Failing to do so will result in the following error message: {{ic|ERROR: Root device mounted successfully, but /sbin/init does not exist.}}<br />
<br />
==== ZFS root installations ====<br />
<br />
When booting from a [[ZFS]] dataset, add {{ic|zfs<nowiki>=</nowiki><root dataset>}} to the {{ic|options}} line. Here the root dataset has been set to 'zroot/ROOT/default':<br />
<br />
{{hc|''esp''/loader/entries/arch-zfs.conf|2=<br />
title Arch Linux ZFS<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options zfs=zroot/ROOT/default rw<br />
}}<br />
<br />
When booting off a ZFS dataset, ensure that the {{ic|bootfs}} property is set with {{ic|zpool set bootfs<nowiki>=</nowiki><root dataset> <zpool>}}.<br />
<br />
==== EFI Shells or other EFI apps ====<br />
<br />
If you have installed EFI shells or other EFI applications on the ESP, you can use the following snippets:<br />
<br />
{{hc|''esp''/loader/entries/uefi-shell-v1-x86_64.conf|2=<br />
title UEFI Shell x86_64 v1<br />
efi /EFI/shellx64_v1.efi<br />
}}<br />
<br />
{{hc|''esp''/loader/entries/uefi-shell-v2-x86_64.conf|2=<br />
title UEFI Shell x86_64 v2<br />
efi /EFI/shellx64_v2.efi<br />
}}<br />
<br />
{{Expansion|Add example on how to boot into EFI firmware setup.}}<br />
<br />
=== Preparing kernels for EFI\Linux ===<br />
<br />
{{Style|Does not belong here, not specific to systemd-boot.}}<br />
<br />
The {{ic|EFI\Linux}} directory is searched for specially prepared kernel files, which bundle the kernel, the initrd, the kernel command line and {{ic|/etc/os-release}} into one file. Such a file can easily be signed for Secure Boot.<br />
<br />
Create the bundle file like this:<br />
<br />
{{hc|Kernel packaging command:|<nowiki>objcopy \<br />
--add-section .osrel="/usr/lib/os-release" --change-section-vma .osrel=0x20000 \<br />
--add-section .cmdline="kernel command line" --change-section-vma .cmdline=0x30000 \<br />
--add-section .linux="vmlinuz-file" --change-section-vma .linux=0x40000 \<br />
--add-section .initrd="initrd-file" --change-section-vma .initrd=0x3000000 \<br />
"/usr/lib/systemd/boot/efi/linuxx64.efi.stub" "linux.efi"</nowiki>}}<br />
<br />
Optionally sign ''linux.efi'' now (e.g. using ''sbsigntools'' from the [[AUR]]).<br />
<br />
Copy ''linux.efi'' into {{ic|''esp''\EFI\Linux}}.<br />
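You can sanity-check the resulting bundle by listing its sections with ''objdump'' (a quick check, assuming ''binutils'' is installed); the output should include the {{ic|.osrel}}, {{ic|.cmdline}}, {{ic|.linux}} and {{ic|.initrd}} sections added above:<br />
<br />
{{bc|$ objdump -h linux.efi}}<br />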
<br />
=== Support hibernation ===<br />
<br />
See [[Suspend and hibernate]].<br />
<br />
=== Kernel parameters editor with password protection ===<br />
<br />
Alternatively, you can install {{AUR|systemd-boot-password}}, which supports a {{ic|password}} option in the basic configuration. Use {{ic|sbpctl generate}} to generate a value for this option.<br />
<br />
Install ''systemd-boot-password'' with the following command:<br />
<br />
{{bc|1=# sbpctl install ''esp''}}<br />
<br />
With the editor enabled, you will be prompted for your password before you can edit kernel parameters.<br />
<br />
== Keys inside the boot menu ==<br />
<br />
The following keys are used inside the menu:<br />
* {{ic|Up/Down}} - select entry<br />
* {{ic|Enter}} - boot the selected entry<br />
* {{ic|d}} - select the default entry to boot (stored in a non-volatile EFI variable)<br />
* {{ic|-/T}} - decrease the timeout (stored in a non-volatile EFI variable)<br />
* {{ic|+/t}} - increase the timeout (stored in a non-volatile EFI variable)<br />
* {{ic|e}} - edit the kernel command line. It has no effect if the {{ic|editor}} config option is set to {{ic|0}}.<br />
* {{ic|v}} - show the ''systemd-boot'' (formerly gummiboot) and UEFI version<br />
* {{ic|Q}} - quit<br />
* {{ic|P}} - print the current configuration<br />
* {{ic|h/?}} - help<br />
<br />
These hotkeys will, when pressed inside the menu or during bootup, directly boot<br />
a specific entry:<br />
<br />
* {{ic|l}} - Linux<br />
* {{ic|w}} - Windows<br />
* {{ic|a}} - OS X<br />
* {{ic|s}} - EFI Shell<br />
* {{ic|1-9}} - number of entry<br />
<br />
== Troubleshooting ==<br />
<br />
=== Manual entry using efibootmgr ===<br />
<br />
If the {{ic|bootctl install}} command failed, you can create an EFI boot entry manually using {{Pkg|efibootmgr}}:<br />
<br />
# efibootmgr -c -d /dev/sdX -p Y -l /EFI/systemd/systemd-bootx64.efi -L "Linux Boot Manager"<br />
<br />
where {{ic|/dev/sdXY}} is the [[EFI System Partition]].<br />
<br />
=== Menu does not appear after Windows upgrade ===<br />
<br />
See [[UEFI#Windows changes boot order]].<br />
<br />
=== Non-root drives are not decrypted by sd-lvm2 mkinitcpio hook ===<br />
<br />
{{Move|LVM|How is this related to ''systemd-boot''?}}<br />
<br />
Even if the drives are listed in {{ic|/etc/crypttab}}, {{ic|systemd-cryptsetup-generator}} may choose to skip them. From {{man|8|systemd-cryptsetup-generator}}:<br />
<br />
If /etc/crypttab exists, only those UUIDs specified on the kernel<br />
command line will be activated in the initrd or the real root.<br />
<br />
This may make the boot process fail, because LVM cannot locate its physical volumes. The failure manifests as timeout errors:<br />
<br />
Timed out waiting for device dev-mapper-VolGroup00\x2dLVNAME.device.<br />
<br />
(This is not unexpected: after all, {{ic|systemd-cryptsetup-generator}} never decrypted the device which stores {{ic|VolGroup00/LVNAME}}.)<br />
<br />
To work around this, make sure that in kernel boot parameters, {{ic|rd.luks.uuid}} is used to specify the LUKS device containing the root volume, and not just {{ic|luks.uuid}}.<br />
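For example, if the LUKS container holding the root volume has UUID {{ic|<uuid>}} (a placeholder), the entry's {{ic|options}} line might look like the following sketch:<br />
<br />
{{bc|1=options rd.luks.uuid=<uuid> root=/dev/mapper/MyVolGroup-MyVolRoot rw}}<br />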
<br />
== See also ==<br />
<br />
* http://www.freedesktop.org/wiki/Software/systemd/systemd-boot/</div>Veoxhttps://wiki.archlinux.org/index.php?title=Systemd-boot&diff=494115Systemd-boot2017-10-26T17:19:11Z<p>Veox: /* Non-root LUKS-encrypted drives are not decrypted when using sd-lvm2 mkinitcpio hook, even if properly listed in /etc/crypttab */ Shorten title, add man-page quote, clarify a bit.</p>
<hr />
<div>{{lowercase title}}<br />
[[Category:Boot loaders]]<br />
[[de:Gummiboot]]<br />
[[es:Systemd-boot]]<br />
[[ja:Systemd-boot]]<br />
[[ru:Systemd-boot]]<br />
[[zh-hans:Systemd-boot]]<br />
{{Related articles start}}<br />
{{Related|Arch boot process}}<br />
{{Related|Boot loaders}}<br />
{{Related|Secure Boot}}<br />
{{Related|Unified Extensible Firmware Interface}}<br />
{{Related articles end}}<br />
<br />
'''systemd-boot''', previously called '''gummiboot''', is a simple UEFI boot manager which executes configured EFI images. The default entry is selected by a configured pattern (glob) or an on-screen menu. It is included with {{pkg|systemd}}, which is installed on Arch system by default.<br />
<br />
It is simple to configure but it can only start EFI executables such as the Linux kernel [[EFISTUB]], UEFI Shell, GRUB, the Windows Boot Manager.<br />
<br />
== Installation ==<br />
<br />
=== EFI boot ===<br />
<br />
# Make sure you are booted in UEFI mode.<br />
# Verify [[Unified_Extensible_Firmware_Interface#Requirements_for_UEFI_variable_support|your EFI variables are accessible]].<br />
# Mount your [[EFI System Partition]] (ESP) properly. {{ic|''esp''}} is used to denote the mountpoint in this article. {{Note|''systemd-boot'' cannot load EFI binaries from other partitions. It is therefore recommended to mount your ESP to {{ic|/boot}}. In case you want to separate {{ic|/boot}} from the ESP see [[#Manually]] for more information.}}<br />
# If the ESP is '''not''' mounted at {{ic|/boot}}, then copy your kernel and initramfs onto that ESP. {{Note|For a way to automatically keep the kernel updated on the ESP, have a look at [[EFISTUB#Using systemd]] for some systemd units that can be adapted. If your EFI System Partition is using automount, you may need to add {{ic|vfat}} to a file in {{ic|/etc/modules-load.d/}} to ensure the current running kernel has the {{ic|vfat}} module loaded at boot, before any kernel update happens that could replace the module for the currently running version making the mounting of {{ic|/boot/efi}} impossible until reboot.}}<br />
# Type the following command to install ''systemd-boot'': {{bc|1=# bootctl --path=''esp'' install}} It will copy the ''systemd-boot'' binary to your EFI System Partition ({{ic|''esp''/EFI/systemd/systemd-bootx64.efi}} and {{ic|''esp''/EFI/Boot/BOOTX64.EFI}} – both of which are identical – on x86-64 systems) and add ''systemd-boot'' itself as the default EFI application (default boot entry) loaded by the EFI Boot Manager.<br />
# Finally you must [[#Configuration|configure]] the boot loader to function properly.<br />
<br />
=== BIOS boot ===<br />
<br />
{{Warning|This is not recommended.}}<br />
You can successfully install ''systemd-boot'' if booted with in BIOS mode. However, this process requires you to tell firmware to launch ''systemd-boot'''s EFI file at boot, usually via two ways:<br />
<br />
* you have a working EFI Shell somewhere else.<br />
<br />
* your firmware interface provides a way of properly setting the EFI file that needs to be loaded at boot time.<br />
<br />
If you can do it, the installation is easier: go into your EFI Shell or your firmware configuration interface and change your machine's default EFI file to {{ic|''esp''/EFI/systemd/systemd-bootx64.efi}} ( or {{ic|systemd-bootia32.efi}} depending if your system firmware is 32 bit).<br />
<br />
{{Note|the firmware interface of Dell Latitude series provides everything you need to setup EFI boot but the EFI Shell won't be able to write to the computer's ROM.}}<br />
<br />
=== Updating ===<br />
<br />
Unlike the previous separate ''gummiboot'' package, which updated automatically on a new package release with a {{ic|post_install}} script, updates of new ''systemd-boot'' versions must now be done manually by the user. However the procedure can be automated using pacman hooks.<br />
<br />
==== Manually ====<br />
<br />
''systemd-boot'' ({{man|1|bootctl}}) assumes that your EFI System Partition is mounted on {{ic|/boot}}.<br />
<br />
# bootctl update<br />
<br />
If the ESP is not mounted on {{ic|/boot}}, the {{ic|1=--path=}} option can pass it. For example: <br />
<br />
# bootctl --path=''esp'' update<br />
<br />
{{Note|This is also the command to use when migrating from ''gummiboot'', before removing that package. If that package has already been removed, however, run {{ic|1=bootctl --path=''esp'' install}}.}}<br />
<br />
==== Automatically ====<br />
<br />
The [[AUR]] package {{AUR|systemd-boot-pacman-hook}} provides a [[Pacman#Hooks|Pacman hook]] to automate the update process. [[Install|Installing]] the package will add a hook which will be executed every time the {{Pkg|systemd}} package is upgraded.<br />
<br />
Alternatively, place the following pacman hook in the {{ic|/etc/pacman.d/hooks/}} directory:<br />
<br />
{{hc|/etc/pacman.d/hooks/systemd-boot.hook|2=<br />
[Trigger]<br />
Type = Package<br />
Operation = Upgrade<br />
Target = systemd<br />
<br />
[Action]<br />
Description = Updating systemd-boot...<br />
When = PostTransaction<br />
Exec = /usr/bin/bootctl update<br />
}}<br />
<br />
== Configuration ==<br />
<br />
=== Basic configuration ===<br />
<br />
The basic configuration is stored in {{ic|''esp''/loader/loader.conf}} file and it is composed by three options:<br />
<br />
* {{ic|default}} – default entry to select (without the {{ic|.conf}} suffix); can be a wildcard like {{ic|arch-*}}.<br />
<br />
* {{ic|timeout}} – menu timeout in seconds. If this is not set, the menu will only be shown on key press during boot.<br />
<br />
* {{ic|editor}} – whether to enable the kernel parameters editor or not. {{ic|1}} (default) is enabled, {{ic|0}} is disabled; since the user can add {{ic|1=init=/bin/bash}} to bypass root password and gain root access, it is strongly recommended to set this option to {{ic|0}}.<br />
<br />
Example:<br />
<br />
{{hc|''esp''/loader/loader.conf|<br />
default arch<br />
timeout 4<br />
editor 0<br />
}}<br />
<br />
{{Note|The first 2 options can be changed in the boot menu itself and changes will be stored as EFI variables.}}<br />
<br />
{{Tip|A basic configuration file example is located at {{ic|/usr/share/systemd/bootctl/loader.conf}}.}}<br />
<br />
=== Adding boot entries ===<br />
<br />
{{Note|<br />
* ''bootctl'' will automatically check for "'''Windows Boot Manager'''" ({{ic|\EFI\Microsoft\Boot\Bootmgfw.efi}}), "'''EFI Shell'''" ({{ic|\shellx64.efi}}) and "'''EFI Default Loader'''" ({{ic|\EFI\Boot\bootx64.efi}}) at boot time, as well as specially prepared kernel files found in {{ic|\EFI\Linux}}. When detected, corresponding entries with titles {{ic|auto-windows}}, {{ic|auto-efi-shell}} and {{ic|auto-efi-default}}, respectively, will be automatically generated. These entries do not require manual loader configuration. However, it does not auto-detect other EFI applications (unlike [[rEFInd]]), so for booting the Linux kernel, manual configuration entries must be created.<br />
<br />
* If you dual-boot Windows, it is strongly recommended to disable its default [[Dual boot with Windows#Fast_Start-Up|Fast Start-Up]] option.<br />
* Remember to load the intel [[microcode]] with {{ic|initrd}} if applicable.<br />
* You can find the {{ic|PARTUUID}} for your root partition with the command {{ic|1=blkid -s PARTUUID -o value /dev/sd''xY''}}, where {{ic|''x''}} is the device letter and {{ic|''Y''}} is the partition number. This is required only for your root partition, not {{ic|''esp''}}.}}<br />
<br />
''bootctl'' searches for boot menu items in {{ic|''esp''/loader/entries/*.conf}} – each file found must contain exactly one boot entry. The possible options are:<br />
<br />
* {{ic|title}} – operating system name. '''Required.'''<br />
<br />
* {{ic|version}} – kernel version, shown only when multiple entries with same title exist. Optional.<br />
<br />
* {{ic|machine-id}} – machine identifier from {{ic|/etc/machine-id}}, shown only when multiple entries with same title and version exist. Optional.<br />
<br />
* {{ic|efi}} – EFI program to start, relative to your ESP ({{ic|''esp''}}); e.g. {{ic|/vmlinuz-linux}}. Either this or {{ic|linux}} (see below) is '''required.'''<br />
<br />
* {{ic|options}} – command line options to pass to the EFI program or kernel boot parameters. Optional, but you will need at least {{ic|1=initrd=''efipath''}} and {{ic|1=root=''dev''}} if booting Linux.<br />
<br />
For Linux, you can specify {{ic|linux ''path-to-vmlinuz''}} and {{ic|initrd ''path-to-initramfs''}}; this will be automatically translated to {{ic|efi ''path''}} and {{ic|1=options initrd=''path''}} – this syntax is only supported for convenience and has no differences in function.<br />
<br />
{{Style|There shouldn't be so many examples for specifying mount options or [[kernel parameters]].}}<br />
<br />
==== Standard root installations ====<br />
<br />
Here is an example entry for a root partition without LVM or LUKS:<br />
<br />
{{hc|''esp''/loader/entries/arch.conf|2=<br />
title Arch Linux<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options root=PARTUUID=14420948-2cea-4de7-b042-40f67c618660 rw<br />
}}<br />
<br />
Please note in the example above that {{ic|PARTUUID}}/{{ic|PARTLABEL}} identifies a GPT partition, and differs from {{ic|UUID}}/{{ic|LABEL}}, which identifies a filesystem. Using the {{ic|PARTUUID}}/{{ic|PARTLABEL}} is advantageous because it is invariant (i.e. unchanging) if you reformat the partition with another filesystem, or if the {{ic|/dev/sd* }}mapping changed for some reason. It is also useful if you do not have a filesystem on the partition (or use LUKS, which does not support {{ic|LABEL}}s).<br />
<br />
{{Tip|An example entry file is located at {{ic|/usr/share/systemd/bootctl}}.}}<br />
<br />
==== LVM root installations ====<br />
<br />
{{Warning|''systemd-boot'' cannot be used without a separate {{ic|/boot}} filesystem outside of LVM.}}<br />
<br />
Here is an example for a root partition using [[LVM|Logical Volume Management]]:<br />
<br />
{{hc|''esp''/loader/entries/arch-lvm.conf|2=<br />
title Arch Linux (LVM)<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options root=/dev/mapper/<VolumeGroup-LogicalVolume> rw<br />
}}<br />
<br />
Replace {{ic|<VolumeGroup-LogicalVolume>}} with the actual VG and LV names (e.g. {{ic|1=root=/dev/mapper/volgroup00-lvolroot}}). Alternatively, it is also possible to use a UUID instead:<br />
....<br />
options root=UUID=<UUID identifier> rw<br />
<br />
Note that {{ic|1=root='''UUID'''=}} is used instead of {{ic|1=root='''PARTUUID'''=}}, which is used for Root partitions without LVM or LUKS.<br />
<br />
==== Encrypted Root Installations ====<br />
<br />
Here is an example configuration file for an encrypted root partition ([[Dm-crypt|DM-Crypt / LUKS]]) using the {{ic|encrypt}} [[mkinitcpio]] hook:<br />
<br />
{{hc|''esp''/loader/entries/arch-encrypted.conf|2=<br />
title Arch Linux Encrypted<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options cryptdevice=UUID=<UUID>:<mapped-name> root=/dev/mapper/<mapped-name> quiet rw<br />
}}<br />
<br />
UUID is used in this example; {{ic|PARTUUID}} should be able to replace the UUID, if so desired. You may also replace the {{ic|/dev}} path with a regular UUID. {{ic|mapped-name}} is whatever you want it to be called. See [[Dm-crypt/System configuration#Boot loader]].<br />
<br />
If you are using LVM, your cryptdevice line will look like this:<br />
<br />
{{hc|''esp''/loader/entries/arch-encrypted-lvm.conf|2=<br />
title Arch Linux Encrypted LVM<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options cryptdevice=UUID=<UUID>:MyVolGroup root=/dev/mapper/MyVolGroup-MyVolRoot quiet rw<br />
}}<br />
<br />
You can also add other EFI programs such as {{ic|\EFI\arch\grub.efi}}.<br />
<br />
==== btrfs subvolume root installations ====<br />
<br />
If booting a [[btrfs]] subvolume as root, amend the {{ic|options}} line with {{ic|rootflags<nowiki>=</nowiki>subvol<nowiki>=</nowiki><root subvolume>}}. In the example below, root has been mounted as a btrfs subvolume called 'ROOT' (e.g. {{ic|mount -o subvol<nowiki>=</nowiki>ROOT /dev/sdxY /mnt}}):<br />
<br />
{{hc|''esp''/loader/entries/arch-btrfs-subvol.conf|2=<br />
title Arch Linux<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options root=PARTUUID=14420948-2cea-4de7-b042-40f67c618660 rw rootflags<nowiki>=</nowiki>subvol<nowiki>=</nowiki>ROOT<br />
}}<br />
<br />
A failure to do so will otherwise result in the following error message: {{ic|ERROR: Root device mounted successfully, but /sbin/init does not exist.}}<br />
<br />
==== ZFS root installations ====<br />
<br />
When booting from a [[ZFS]] dataset, add {{ic|zfs<nowiki>=</nowiki><root dataset>}} to the {{ic|options}} line. Here the root dataset has been set to 'zroot/ROOT/default':<br />
<br />
{{hc|''esp''/loader/entries/arch-zfs.conf|2=<br />
title Arch Linux ZFS<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options zfs=zroot/ROOT/default rw<br />
}}<br />
<br />
When booting off of a ZFS dataset ensure that it has had the {{ic|bootfs}} property set with {{ic| zpool set bootfs<nowiki>=</nowiki><root dataset> <zpool>}}.<br />
<br />
==== EFI Shells or other EFI apps ====<br />
<br />
In case you installed EFI shells and other EFI application into the ESP, you can use the following snippets:<br />
<br />
{{hc|''esp''/loader/entries/uefi-shell-v1-x86_64.conf|2=<br />
title UEFI Shell x86_64 v1<br />
efi /EFI/shellx64_v1.efi<br />
}}<br />
<br />
{{hc|''esp''/loader/entries/uefi-shell-v2-x86_64.conf|2=<br />
title UEFI Shell x86_64 v2<br />
efi /EFI/shellx64_v2.efi<br />
}}<br />
<br />
{{Expansion|Add example on how to boot into EFI firmware setup.}}<br />
<br />
=== Preparing kernels for EFI\Linux ===<br />
<br />
{{Style|Does not belong here, not specific to systemd-boot.}}<br />
<br />
''EFI\Linux'' is searched for specially prepared kernel files, which bundle the kernel, the initrd, the kernel command line and /etc/os-release into one file. This file can be easily signed for secure boot.<br />
<br />
Create the bundle file like this:<br />
<br />
{{hc|Kernel packaging command:|<nowiki>objcopy \<br />
--add-section .osrel="/usr/lib/os-release" --change-section-vma .osrel=0x20000 \<br />
--add-section .cmdline="kernel command line" --change-section-vma .cmdline=0x30000 \<br />
--add-section .linux="vmlinuz-file" --change-section-vma .linux=0x40000 \<br />
--add-section .initrd="initrd-file" --change-section-vma .initrd=0x3000000 \<br />
"/usr/lib/systemd/boot/efi/linuxx64.efi.stub" "linux.efi"</nowiki>}}<br />
<br />
Optionally sign ''linux.efi'' now (e.g. using ''sbsigntools'' from AUR).<br />
<br />
Copying ''linux.efi'' into ''{{ic|''esp''\EFI\Linux}}''.<br />
<br />
=== Support hibernation ===<br />
<br />
See [[Suspend and hibernate]].<br />
<br />
=== Kernel parameters editor with password protection ===<br />
<br />
Alternatively you can install {{AUR|systemd-boot-password}} which supports {{ic|password}} basic configuration option. Use {{ic|sbpctl generate}} to generate a value for this option.<br />
<br />
Install ''systemd-boot-password'' with the following command:<br />
<br />
{{bc|1=# sbpctl install ''esp''}}<br />
<br />
With enabled editor you will be prompted for your password before you can edit kernel parameters.<br />
<br />
== Keys inside the boot menu ==<br />
<br />
The following keys are used inside the menu:<br />
* {{ic|Up/Down}} - select entry<br />
* {{ic|Enter}} - boot the selected entry<br />
* {{ic|d}} - select the default entry to boot (stored in a non-volatile EFI variable)<br />
* {{ic|-/T}} - decrease the timeout (stored in a non-volatile EFI variable)<br />
* {{ic|+/t}} - increase the timeout (stored in a non-volatile EFI variable)<br />
* {{ic|e}} - edit the kernel command line. It has no effect if the {{ic|editor}} config option is set to {{ic|0}}.<br />
* {{ic|v}} - show the gummiboot and UEFI version<br />
* {{ic|Q}} - quit<br />
* {{ic|P}} - print the current configuration<br />
* {{ic|h/?}} - help<br />
<br />
These hotkeys will, when pressed inside the menu or during bootup, directly boot<br />
a specific entry:<br />
<br />
* {{ic|l}} - Linux<br />
* {{ic|w}} - Windows<br />
* {{ic|a}} - OS X<br />
* {{ic|s}} - EFI Shell<br />
* {{ic|1-9}} - number of entry<br />
<br />
== Troubleshooting ==<br />
<br />
=== Manual entry using efibootmgr ===<br />
<br />
If {{ic|bootctl install}} command failed, you can create a EFI boot entry manually using {{Pkg|efibootmgr}}:<br />
<br />
# efibootmgr -c -d /dev/sdX -p Y -l /EFI/systemd/systemd-bootx64.efi -L "Linux Boot Manager"<br />
<br />
where {{ic|/dev/sdXY}} is the [[EFI System Partition]].<br />
<br />
=== Menu does not appear after Windows upgrade ===<br />
<br />
See [[UEFI#Windows changes boot order]].<br />
<br />
=== Non-root drives are not decrypted by sd-lvm2 mkinitcpio hook ===<br />
<br />
Even if the drives are listed in {{ic|/etc/crypttab}}, {{ic|systemd-cryptsetup-generator}} may choose to skip them. From the manpage:<br />
<br />
If /etc/crypttab exists, only those UUIDs specified on the kernel<br />
command line will be activated in the initrd or the real root.<br />
<br />
This may make the boot process fail due to LVM's inability to locate the physical volumes. This manifests as timeout errors:<br />
<br />
Timed out waiting for device dev-mapper-VolGroup00\x2dLVNAME.device.<br />
<br />
(This is not unexpected if {{ic|systemd-cryptsetup-generator}} never decrypted the device which stores {{ic|VolGroup00/LVNAME}}.)<br />
<br />
To work around this, make sure that in kernel boot parameters, {{ic|rd.luks.uuid}} is used to specify the LUKS device containing the root volume, and not just {{ic|luks.uuid}}.<br />
<br />
== See also ==<br />
<br />
* http://www.freedesktop.org/wiki/Software/systemd/systemd-boot/</div>Veoxhttps://wiki.archlinux.org/index.php?title=Systemd-boot&diff=494114Systemd-boot2017-10-26T17:11:39Z<p>Veox: /* Troubleshooting */ Sloppy doc on systemd-cryptsetup-generator "skipping" /etc/crypttab.</p>
<hr />
<div>{{lowercase title}}<br />
[[Category:Boot loaders]]<br />
[[de:Gummiboot]]<br />
[[es:Systemd-boot]]<br />
[[ja:Systemd-boot]]<br />
[[ru:Systemd-boot]]<br />
[[zh-hans:Systemd-boot]]<br />
{{Related articles start}}<br />
{{Related|Arch boot process}}<br />
{{Related|Boot loaders}}<br />
{{Related|Secure Boot}}<br />
{{Related|Unified Extensible Firmware Interface}}<br />
{{Related articles end}}<br />
<br />
'''systemd-boot''', previously called '''gummiboot''', is a simple UEFI boot manager which executes configured EFI images. The default entry is selected by a configured pattern (glob) or an on-screen menu. It is included with {{pkg|systemd}}, which is installed on Arch systems by default.<br />
<br />
It is simple to configure, but can only start EFI executables such as the Linux kernel [[EFISTUB]], the UEFI Shell, GRUB, or the Windows Boot Manager.<br />
<br />
== Installation ==<br />
<br />
=== EFI boot ===<br />
<br />
# Make sure you are booted in UEFI mode.<br />
# Verify [[Unified_Extensible_Firmware_Interface#Requirements_for_UEFI_variable_support|your EFI variables are accessible]].<br />
# Mount your [[EFI System Partition]] (ESP) properly. {{ic|''esp''}} is used to denote the mountpoint in this article. {{Note|''systemd-boot'' cannot load EFI binaries from other partitions. It is therefore recommended to mount your ESP to {{ic|/boot}}. In case you want to separate {{ic|/boot}} from the ESP see [[#Manually]] for more information.}}<br />
# If the ESP is '''not''' mounted at {{ic|/boot}}, then copy your kernel and initramfs onto that ESP. {{Note|For a way to automatically keep the kernel updated on the ESP, have a look at [[EFISTUB#Using systemd]] for some systemd units that can be adapted. If your EFI System Partition is using automount, you may need to add {{ic|vfat}} to a file in {{ic|/etc/modules-load.d/}} to ensure the current running kernel has the {{ic|vfat}} module loaded at boot, before any kernel update happens that could replace the module for the currently running version making the mounting of {{ic|/boot/efi}} impossible until reboot.}}<br />
# Type the following command to install ''systemd-boot'': {{bc|1=# bootctl --path=''esp'' install}} It will copy the ''systemd-boot'' binary to your EFI System Partition ({{ic|''esp''/EFI/systemd/systemd-bootx64.efi}} and {{ic|''esp''/EFI/Boot/BOOTX64.EFI}} – both of which are identical – on x86-64 systems) and add ''systemd-boot'' itself as the default EFI application (default boot entry) loaded by the EFI Boot Manager.<br />
# Finally you must [[#Configuration|configure]] the boot loader to function properly.<br />
<br />
=== BIOS boot ===<br />
<br />
{{Warning|This is not recommended.}}<br />
You can successfully install ''systemd-boot'' even if booted in BIOS mode. However, this requires you to tell the firmware to launch ''systemd-boot'''s EFI file at boot, which is usually possible in one of two ways:<br />
<br />
* you have a working EFI Shell somewhere else.<br />
<br />
* your firmware interface provides a way of properly setting the EFI file that needs to be loaded at boot time.<br />
<br />
If you can do it, the installation is easier: go into your EFI Shell or your firmware configuration interface and change your machine's default EFI file to {{ic|''esp''/EFI/systemd/systemd-bootx64.efi}} (or {{ic|systemd-bootia32.efi}} if your system firmware is 32-bit).<br />
<br />
{{Note|The firmware interface of the Dell Latitude series provides everything needed to set up EFI boot, but the EFI Shell will not be able to write to the computer's ROM.}}<br />
<br />
=== Updating ===<br />
<br />
Unlike the previous separate ''gummiboot'' package, which updated automatically on a new package release with a {{ic|post_install}} script, updates of new ''systemd-boot'' versions must now be applied manually by the user. However, the procedure can be automated using pacman hooks.<br />
<br />
==== Manually ====<br />
<br />
''systemd-boot'' ({{man|1|bootctl}}) assumes that your EFI System Partition is mounted on {{ic|/boot}}.<br />
<br />
# bootctl update<br />
<br />
If the ESP is not mounted on {{ic|/boot}}, use the {{ic|1=--path=}} option to specify it. For example:<br />
<br />
# bootctl --path=''esp'' update<br />
<br />
{{Note|This is also the command to use when migrating from ''gummiboot'', before removing that package. If that package has already been removed, however, run {{ic|1=bootctl --path=''esp'' install}}.}}<br />
<br />
==== Automatically ====<br />
<br />
The [[AUR]] package {{AUR|systemd-boot-pacman-hook}} provides a [[Pacman#Hooks|Pacman hook]] to automate the update process. [[Install|Installing]] the package will add a hook which will be executed every time the {{Pkg|systemd}} package is upgraded.<br />
<br />
Alternatively, place the following pacman hook in the {{ic|/etc/pacman.d/hooks/}} directory:<br />
<br />
{{hc|/etc/pacman.d/hooks/systemd-boot.hook|2=<br />
[Trigger]<br />
Type = Package<br />
Operation = Upgrade<br />
Target = systemd<br />
<br />
[Action]<br />
Description = Updating systemd-boot...<br />
When = PostTransaction<br />
Exec = /usr/bin/bootctl update<br />
}}<br />
<br />
== Configuration ==<br />
<br />
=== Basic configuration ===<br />
<br />
The basic configuration is stored in the {{ic|''esp''/loader/loader.conf}} file and consists of three options:<br />
<br />
* {{ic|default}} – default entry to select (without the {{ic|.conf}} suffix); can be a wildcard like {{ic|arch-*}}.<br />
<br />
* {{ic|timeout}} – menu timeout in seconds. If this is not set, the menu will only be shown on key press during boot.<br />
<br />
* {{ic|editor}} – whether to enable the kernel parameters editor or not. {{ic|1}} (default) is enabled, {{ic|0}} is disabled; since the user can add {{ic|1=init=/bin/bash}} to bypass root password and gain root access, it is strongly recommended to set this option to {{ic|0}}.<br />
<br />
Example:<br />
<br />
{{hc|''esp''/loader/loader.conf|<br />
default arch<br />
timeout 4<br />
editor 0<br />
}}<br />
<br />
{{Note|The first two options can be changed in the boot menu itself, and changes will be stored as EFI variables.}}<br />
<br />
{{Tip|A basic configuration file example is located at {{ic|/usr/share/systemd/bootctl/loader.conf}}.}}<br />
<br />
=== Adding boot entries ===<br />
<br />
{{Note|<br />
* ''bootctl'' will automatically check for "'''Windows Boot Manager'''" ({{ic|\EFI\Microsoft\Boot\Bootmgfw.efi}}), "'''EFI Shell'''" ({{ic|\shellx64.efi}}) and "'''EFI Default Loader'''" ({{ic|\EFI\Boot\bootx64.efi}}) at boot time, as well as specially prepared kernel files found in {{ic|\EFI\Linux}}. When detected, corresponding entries with titles {{ic|auto-windows}}, {{ic|auto-efi-shell}} and {{ic|auto-efi-default}}, respectively, will be automatically generated. These entries do not require manual loader configuration. However, it does not auto-detect other EFI applications (unlike [[rEFInd]]), so for booting the Linux kernel, manual configuration entries must be created.<br />
<br />
* If you dual-boot Windows, it is strongly recommended to disable its default [[Dual boot with Windows#Fast_Start-Up|Fast Start-Up]] option.<br />
* Remember to load the Intel [[microcode]] with {{ic|initrd}} if applicable.<br />
* You can find the {{ic|PARTUUID}} for your root partition with the command {{ic|1=blkid -s PARTUUID -o value /dev/sd''xY''}}, where {{ic|''x''}} is the device letter and {{ic|''Y''}} is the partition number. This is required only for your root partition, not {{ic|''esp''}}.}}<br />
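The steps above can be scripted: read the {{ic|PARTUUID}}, then write the entry file under {{ic|''esp''/loader/entries/}}. The sketch below is purely illustrative — it uses a demo directory and a hard-coded placeholder PARTUUID instead of a real ESP and {{ic|blkid}} output, so adapt both before use.<br />

```shell
# Hedged sketch: assemble a boot entry file from a PARTUUID value.
# On a real system, obtain the value with:
#   blkid -s PARTUUID -o value /dev/sdxY
esp=/tmp/esp-demo                                  # assumption: demo dir, not a real ESP
partuuid=14420948-2cea-4de7-b042-40f67c618660      # placeholder value
mkdir -p "$esp/loader/entries"
cat > "$esp/loader/entries/arch.conf" <<EOF
title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=PARTUUID=$partuuid rw
EOF
cat "$esp/loader/entries/arch.conf"
```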
<br />
''bootctl'' searches for boot menu items in {{ic|''esp''/loader/entries/*.conf}} – each file found must contain exactly one boot entry. The possible options are:<br />
<br />
* {{ic|title}} – operating system name. '''Required.'''<br />
<br />
* {{ic|version}} – kernel version, shown only when multiple entries with same title exist. Optional.<br />
<br />
* {{ic|machine-id}} – machine identifier from {{ic|/etc/machine-id}}, shown only when multiple entries with same title and version exist. Optional.<br />
<br />
* {{ic|efi}} – EFI program to start, relative to your ESP ({{ic|''esp''}}); e.g. {{ic|/vmlinuz-linux}}. Either this or {{ic|linux}} (see below) is '''required.'''<br />
<br />
* {{ic|options}} – command line options to pass to the EFI program or kernel boot parameters. Optional, but you will need at least {{ic|1=initrd=''efipath''}} and {{ic|1=root=''dev''}} if booting Linux.<br />
<br />
For Linux, you can specify {{ic|linux ''path-to-vmlinuz''}} and {{ic|initrd ''path-to-initramfs''}}; this will be automatically translated to {{ic|efi ''path''}} and {{ic|1=options initrd=''path''}} – this syntax is only supported for convenience and has no differences in function.<br />
<br />
{{Style|There shouldn't be so many examples for specifying mount options or [[kernel parameters]].}}<br />
<br />
==== Standard root installations ====<br />
<br />
Here is an example entry for a root partition without LVM or LUKS:<br />
<br />
{{hc|''esp''/loader/entries/arch.conf|2=<br />
title Arch Linux<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options root=PARTUUID=14420948-2cea-4de7-b042-40f67c618660 rw<br />
}}<br />
<br />
Please note in the example above that {{ic|PARTUUID}}/{{ic|PARTLABEL}} identifies a GPT partition, whereas {{ic|UUID}}/{{ic|LABEL}} identifies a filesystem. Using {{ic|PARTUUID}}/{{ic|PARTLABEL}} is advantageous because it is invariant: it does not change if you reformat the partition with another filesystem, or if the {{ic|/dev/sd*}} mapping changes for some reason. It is also useful if you do not have a filesystem on the partition (or use LUKS, which does not support {{ic|LABEL}}s).<br />
<br />
{{Tip|An example entry file is located at {{ic|/usr/share/systemd/bootctl}}.}}<br />
<br />
==== LVM root installations ====<br />
<br />
{{Warning|''systemd-boot'' cannot be used without a separate {{ic|/boot}} filesystem outside of LVM.}}<br />
<br />
Here is an example for a root partition using [[LVM|Logical Volume Management]]:<br />
<br />
{{hc|''esp''/loader/entries/arch-lvm.conf|2=<br />
title Arch Linux (LVM)<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options root=/dev/mapper/<VolumeGroup-LogicalVolume> rw<br />
}}<br />
<br />
Replace {{ic|<VolumeGroup-LogicalVolume>}} with the actual VG and LV names (e.g. {{ic|1=root=/dev/mapper/volgroup00-lvolroot}}). Alternatively, it is also possible to use a UUID instead:<br />
....<br />
options root=UUID=<UUID identifier> rw<br />
<br />
Note that {{ic|1=root='''UUID'''=}} is used instead of {{ic|1=root='''PARTUUID'''=}}, which is used for root partitions without LVM or LUKS.<br />
<br />
==== Encrypted Root Installations ====<br />
<br />
Here is an example configuration file for an encrypted root partition ([[Dm-crypt|DM-Crypt / LUKS]]) using the {{ic|encrypt}} [[mkinitcpio]] hook:<br />
<br />
{{hc|''esp''/loader/entries/arch-encrypted.conf|2=<br />
title Arch Linux Encrypted<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options cryptdevice=UUID=<UUID>:<mapped-name> root=/dev/mapper/<mapped-name> quiet rw<br />
}}<br />
<br />
{{ic|UUID}} is used in this example; {{ic|PARTUUID}} may replace it, if so desired. The {{ic|root=/dev/mapper/...}} path may also be replaced with the UUID of the filesystem inside the decrypted device. {{ic|mapped-name}} is whatever you want the unlocked device to be called. See [[Dm-crypt/System configuration#Boot loader]].<br />
<br />
If you are using LVM, your cryptdevice line will look like this:<br />
<br />
{{hc|''esp''/loader/entries/arch-encrypted-lvm.conf|2=<br />
title Arch Linux Encrypted LVM<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options cryptdevice=UUID=<UUID>:MyVolGroup root=/dev/mapper/MyVolGroup-MyVolRoot quiet rw<br />
}}<br />
<br />
You can also add other EFI programs such as {{ic|\EFI\arch\grub.efi}}.<br />
<br />
==== btrfs subvolume root installations ====<br />
<br />
If booting a [[btrfs]] subvolume as root, amend the {{ic|options}} line with {{ic|rootflags<nowiki>=</nowiki>subvol<nowiki>=</nowiki><root subvolume>}}. In the example below, root has been mounted as a btrfs subvolume called 'ROOT' (e.g. {{ic|mount -o subvol<nowiki>=</nowiki>ROOT /dev/sdxY /mnt}}):<br />
<br />
{{hc|''esp''/loader/entries/arch-btrfs-subvol.conf|2=<br />
title Arch Linux<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options root=PARTUUID=14420948-2cea-4de7-b042-40f67c618660 rw rootflags<nowiki>=</nowiki>subvol<nowiki>=</nowiki>ROOT<br />
}}<br />
<br />
Failure to do so will result in the following error message: {{ic|ERROR: Root device mounted successfully, but /sbin/init does not exist.}}<br />
<br />
==== ZFS root installations ====<br />
<br />
When booting from a [[ZFS]] dataset, add {{ic|zfs<nowiki>=</nowiki><root dataset>}} to the {{ic|options}} line. Here the root dataset has been set to 'zroot/ROOT/default':<br />
<br />
{{hc|''esp''/loader/entries/arch-zfs.conf|2=<br />
title Arch Linux ZFS<br />
linux /vmlinuz-linux<br />
initrd /initramfs-linux.img<br />
options zfs=zroot/ROOT/default rw<br />
}}<br />
<br />
When booting from a ZFS dataset, ensure that the pool's {{ic|bootfs}} property has been set with {{ic|zpool set bootfs<nowiki>=</nowiki><root dataset> <zpool>}}.<br />
<br />
==== EFI Shells or other EFI apps ====<br />
<br />
If you installed EFI shells or other EFI applications into the ESP, you can use the following snippets:<br />
<br />
{{hc|''esp''/loader/entries/uefi-shell-v1-x86_64.conf|2=<br />
title UEFI Shell x86_64 v1<br />
efi /EFI/shellx64_v1.efi<br />
}}<br />
<br />
{{hc|''esp''/loader/entries/uefi-shell-v2-x86_64.conf|2=<br />
title UEFI Shell x86_64 v2<br />
efi /EFI/shellx64_v2.efi<br />
}}<br />
<br />
{{Expansion|Add example on how to boot into EFI firmware setup.}}<br />
<br />
=== Preparing kernels for EFI\Linux ===<br />
<br />
{{Style|Does not belong here, not specific to systemd-boot.}}<br />
<br />
The {{ic|EFI\Linux}} directory is searched for specially prepared kernel files, which bundle the kernel, the initrd, the kernel command line and {{ic|/etc/os-release}} into one file. This file can be easily signed for Secure Boot.<br />
<br />
Create the bundle file like this:<br />
<br />
{{hc|Kernel packaging command:|<nowiki>objcopy \<br />
--add-section .osrel="/usr/lib/os-release" --change-section-vma .osrel=0x20000 \<br />
--add-section .cmdline="kernel command line" --change-section-vma .cmdline=0x30000 \<br />
--add-section .linux="vmlinuz-file" --change-section-vma .linux=0x40000 \<br />
--add-section .initrd="initrd-file" --change-section-vma .initrd=0x3000000 \<br />
"/usr/lib/systemd/boot/efi/linuxx64.efi.stub" "linux.efi"</nowiki>}}<br />
<br />
Optionally sign ''linux.efi'' now (e.g. using ''sbsigntools'' from AUR).<br />
<br />
Finally, copy ''linux.efi'' into {{ic|''esp''\EFI\Linux}}.<br />
<br />
=== Support hibernation ===<br />
<br />
See [[Suspend and hibernate]].<br />
<br />
=== Kernel parameters editor with password protection ===<br />
<br />
Alternatively, you can install {{AUR|systemd-boot-password}}, which supports the {{ic|password}} basic configuration option. Use {{ic|sbpctl generate}} to generate a value for this option.<br />
<br />
Install ''systemd-boot-password'' with the following command:<br />
<br />
{{bc|1=# sbpctl install ''esp''}}<br />
<br />
With the editor enabled, you will be prompted for your password before you can edit kernel parameters.<br />
<br />
== Keys inside the boot menu ==<br />
<br />
The following keys are used inside the menu:<br />
* {{ic|Up/Down}} - select entry<br />
* {{ic|Enter}} - boot the selected entry<br />
* {{ic|d}} - select the default entry to boot (stored in a non-volatile EFI variable)<br />
* {{ic|-/T}} - decrease the timeout (stored in a non-volatile EFI variable)<br />
* {{ic|+/t}} - increase the timeout (stored in a non-volatile EFI variable)<br />
* {{ic|e}} - edit the kernel command line. It has no effect if the {{ic|editor}} config option is set to {{ic|0}}.<br />
* {{ic|v}} - show the gummiboot and UEFI version<br />
* {{ic|Q}} - quit<br />
* {{ic|P}} - print the current configuration<br />
* {{ic|h/?}} - help<br />
<br />
These hotkeys will, when pressed inside the menu or during bootup, directly boot<br />
a specific entry:<br />
<br />
* {{ic|l}} - Linux<br />
* {{ic|w}} - Windows<br />
* {{ic|a}} - OS X<br />
* {{ic|s}} - EFI Shell<br />
* {{ic|1-9}} - number of entry<br />
<br />
== Troubleshooting ==<br />
<br />
=== Manual entry using efibootmgr ===<br />
<br />
If the {{ic|bootctl install}} command failed, you can create an EFI boot entry manually using {{Pkg|efibootmgr}}:<br />
<br />
# efibootmgr -c -d /dev/sdX -p Y -l /EFI/systemd/systemd-bootx64.efi -L "Linux Boot Manager"<br />
<br />
where {{ic|/dev/sdX}} is the disk and {{ic|Y}} the partition number of the [[EFI System Partition]].<br />
<br />
=== Menu does not appear after Windows upgrade ===<br />
<br />
See [[UEFI#Windows changes boot order]].<br />
<br />
=== Non-root LUKS-encrypted drives are not decrypted when using sd-lvm2 mkinitcpio hook, even if properly listed in /etc/crypttab ===<br />
<br />
This may make the boot process fail due to LVM's inability to locate the physical volumes. This manifests as timeout errors:<br />
<br />
Timed out waiting for device dev-mapper-VolGroup00\x2dLVNAME.device.<br />
<br />
(This is not unexpected if {{ic|systemd-cryptsetup-generator}} never decrypted the device which stores {{ic|VolGroup00/LVNAME}}.)<br />
<br />
Make sure that in kernel boot parameters, {{ic|rd.luks.uuid}} is used to specify the LUKS device containing the root volume, and not just {{ic|luks.uuid}}.<br />
<br />
== See also ==<br />
<br />
* http://www.freedesktop.org/wiki/Software/systemd/systemd-boot/</div>Veoxhttps://wiki.archlinux.org/index.php?title=LVM&diff=357334LVM2015-01-20T15:03:22Z<p>Veox: /* Remove logical volume */ typo: goup->group</p>
<hr />
<div>[[Category:Getting and installing Arch]]<br />
[[Category:File systems]]<br />
[[cs:LVM]]<br />
[[de:LVM]]<br />
[[es:LVM]]<br />
[[fr:LVM]]<br />
[[it:LVM]]<br />
[[ja:LVM]]<br />
[[ru:LVM]]<br />
[[tr:LVM]]<br />
[[zh-CN:LVM]]<br />
{{Related articles start}}<br />
{{Related|Software RAID and LVM}}<br />
{{Related|dm-crypt/Encrypting an entire system#LVM on LUKS}}<br />
{{Related|dm-crypt/Encrypting an entire system#LUKS on LVM}}<br />
{{Related articles end}}<br />
From [[Wikipedia:Logical Volume Manager (Linux)]]:<br />
:LVM is a [[Wikipedia:logical volume management|logical volume manager]] for the [[Wikipedia:Linux kernel|Linux kernel]]; it manages disk drives and similar mass-storage devices.<br />
<br />
=== LVM Building Blocks ===<br />
<br />
Logical Volume Management utilizes the kernel's [http://sources.redhat.com/dm/ device-mapper] feature to provide a system of partitions independent of the underlying disk layout. With LVM you abstract your storage into "virtual partitions", making extending/shrinking easier (subject to potential filesystem limitations). Virtual partitions allow addition and removal without worrying about whether you have enough contiguous space on a particular disk, getting caught up fdisking a disk in use (and wondering whether the kernel is using the old or new partition table), or having to move other partitions out of the way. This is strictly an ease-of-management solution: LVM adds no security.<br />
<br />
Basic building blocks of LVM:<br />
<br />
* '''Physical volume (PV)''': Partition on hard disk (or even the disk itself or loopback file) on which you can have volume groups. It has a special header and is divided into physical extents. Think of physical volumes as big building blocks used to build your hard drive.<br />
* '''Volume group (VG)''': Group of physical volumes used as a storage volume (as one disk). They contain logical volumes. Think of volume groups as hard drives.<br />
* '''Logical volume (LV)''': A "virtual/logical partition" that resides in a volume group and is composed of physical extents. Think of logical volumes as normal partitions.<br />
* '''Physical extent (PE)''': The smallest size in the physical volume that can be assigned to a logical volume (default 4MiB). Think of physical extents as parts of disks that can be allocated to any partition.<br />
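As a quick sanity check of the extent bookkeeping described above: with the default 4 MiB extent size (an assumption; the actual size is set per volume group), a 10 GiB logical volume occupies 2560 physical extents.<br />

```shell
# Extents arithmetic with the (assumed default) 4 MiB extent size.
lv_mib=$((10 * 1024))   # a 10 GiB logical volume, expressed in MiB
pe_mib=4                # physical extent size in MiB
echo $((lv_mib / pe_mib))
# prints 2560
```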
<br />
Example:<br />
'''Physical disks'''<br />
<br />
Disk1 (/dev/sda):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 50GB (Physical volume) |Partition2 80GB (Physical volume) |<br />
|/dev/sda1 |/dev/sda2 |<br />
|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
Disk2 (/dev/sdb):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br />
|Partition1 120GB (Physical volume) |<br />
|/dev/sdb1 |<br />
| _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _|<br />
<br />
'''LVM logical volumes'''<br />
<br />
Volume Group1 (/dev/MyStorage/ = /dev/sda1 + /dev/sda2 + /dev/sdb1):<br />
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <br />
|Logical volume1 15GB |Logical volume2 35GB |Logical volume3 200GB |<br />
|/dev/MyStorage/rootvol|/dev/MyStorage/homevol |/dev/MyStorage/mediavol |<br />
|_ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |<br />
<br />
=== Advantages ===<br />
<br />
LVM gives you more flexibility than just using normal hard drive partitions:<br />
* Use any number of disks as one big disk.<br />
* Have logical volumes stretched over several disks.<br />
* Create small logical volumes and resize them "dynamically" as they get filled up.<br />
* Resize logical volumes regardless of their order on disk. The resize does not depend on the position of the LV within the VG, and there is no need to ensure surrounding free space.<br />
* Resize/create/delete logical and physical volumes online. File systems on them still need to be resized, but some (such as ext4) support online resizing.<br />
* Online/live migration of LV being used by services to different disks without having to restart services.<br />
* Snapshots allow you to backup a frozen copy of the file system, while keeping service downtime to a minimum.<br />
* Support for various device-mapper targets, including transparent filesystem encryption and caching of frequently used data.<br />
<br />
=== Disadvantages ===<br />
* Additional steps in setting up the system, more complicated.<br />
<br />
== Installing Arch Linux on LVM ==<br />
<br />
You should create your LVM volumes between the [[partitioning]] and [[File_systems#Create a filesystem|formatting]] steps of the [[Installation guide|installation procedure]]. Instead of directly formatting a partition to be your root file system, the file system will be created inside a logical volume (LV).<br />
<br />
Make sure the {{pkg|lvm2}} package is [[pacman|installed]].<br />
<br />
Quick overview: <br />
* Create partition(s) where your PV(s) will reside. Set the partition type to 'Linux LVM', which is 8e if you use MBR, 8e00 for GPT.<br />
* Create your physical volumes (PVs). If you have one disk it is best to just create one PV in one large partition. If you have multiple disks you can create partitions on each of them and create a PV on each partition.<br />
* Create your volume group (VG) and add all PVs to it.<br />
* Create logical volumes (LVs) inside that VG.<br />
* Continue with “Format the partitions” step of [[Beginners' guide]].<br />
* When you reach the “Create initial ramdisk environment” step in the Beginners Guide, add the {{ic|lvm}} hook to {{ic|/etc/mkinitcpio.conf}} (see below for details).<br />
<br />
{{Warning|{{ic|/boot}} cannot reside in LVM when using [[GRUB Legacy]], which does not support LVM. [[GRUB]] users do not have this limitation. If you need to use GRUB Legacy, you must create a separate {{ic|/boot}} partition and format it directly. }}<br />
<br />
=== Create partitions ===<br />
{{Note|This step is optional and depends on the user's preference. In most cases, however, it is recommended to partition the device first.}}<br />
See [[Partitioning]] on how to create partitions on your device.<br />
<br />
=== Create physical volumes ===<br />
To list all your devices capable of being used as a physical volume:<br />
# lvmdiskscan<br />
<br />
{{Warning|Make sure you target the correct device, or the commands below will result in data loss!}}<br />
<br />
Create a physical volume on them:<br />
<br />
# pvcreate ''DEVICE''<br />
<br />
This command creates a header on each device so it can be used for LVM. As defined in [[LVM#LVM_Building_Blocks]], ''DEVICE'' can be a disk (e.g. {{ic|/dev/sda}}), a partition (e.g. {{ic|/dev/sda2}}) or a loopback device. For example:<br />
<br />
# pvcreate /dev/sda2<br />
<br />
You can track created physical volumes with:<br />
# pvdisplay<br />
<br />
{{Note|If using a SSD without partitioning it first, use {{ic|pvcreate --dataalignment 1m /dev/sda}} (for erase block size < 1MiB), see e.g. [http://serverfault.com/questions/356534/ssd-erase-block-size-lvm-pv-on-raw-device-alignment here]}}<br />
<br />
=== Create volume group ===<br />
<br />
The next step is to create a volume group. First, create the volume group on one of the physical volumes:<br />
<br />
# vgcreate <''volume_group''> <''physical_volume''><br />
<br />
For example:<br />
<br />
# vgcreate VolGroup00 /dev/sda2<br />
<br />
Then add to it all other physical volumes you want to have in it:<br />
<br />
# vgextend <''volume_group''> <''physical_volume''><br />
# vgextend <''volume_group''> <''another_physical_volume''><br />
# ...<br />
<br />
For example:<br />
<br />
# vgextend VolGroup00 /dev/sdb1<br />
# vgextend VolGroup00 /dev/sdc<br />
<br />
You can track how your volume group grows with:<br />
<br />
# vgdisplay<br />
<br />
{{Note|You can create more than one volume group if you need to, but then you will not have all your storage presented as one disk.}}<br />
<br />
=== Create in one step ===<br />
<br />
LVM allows you to combine the creation of a volume group and the physical volumes in one easy step. For example, to create the group VolGroup00 with the three devices mentioned above, you can run:<br />
<br />
# vgcreate VolGroup00 /dev/sda2 /dev/sdb1 /dev/sdc<br />
<br />
This command will first set up the three partitions as physical volumes (if necessary) and then create the volume group with the three volumes. The command will warn you if it detects an existing filesystem on any of the devices.<br />
<br />
=== Create logical volumes ===<br />
<br />
Now we need to create logical volumes in this volume group. Create a logical volume with the following command, giving the name of the new logical volume, its size, and the volume group it will live on:<br />
<br />
# lvcreate -L <''size''> <''volume_group''> -n <''logical_volume''><br />
<br />
For example:<br />
<br />
# lvcreate -L 10G VolGroup00 -n lvolhome<br />
<br />
This will create a logical volume that you can access later with {{ic|/dev/mapper/VolGroup00-lvolhome}} or {{ic|/dev/VolGroup00/lvolhome}}. As with volume groups, you can use any name you want for your logical volume when creating it.<br />
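The {{ic|/dev/mapper}} name is formed by joining the volume group and logical volume names with a dash; a dash that is part of either name itself is escaped by doubling it. A small sketch of this mapping, using hypothetical names:<br />

```shell
# Sketch of device-mapper name flattening: '-' inside a VG or LV name is
# doubled, then VG and LV are joined with a single '-'.
vg='my-vg'   # hypothetical volume group name containing a dash
lv='root'    # hypothetical logical volume name
vg_esc=$(printf '%s' "$vg" | sed 's/-/--/g')
lv_esc=$(printf '%s' "$lv" | sed 's/-/--/g')
printf '/dev/mapper/%s-%s\n' "$vg_esc" "$lv_esc"
# prints /dev/mapper/my--vg-root
```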
<br />
You can also specify one or more physical volumes to restrict where LVM allocates the data. For example, you may wish to create a logical volume for the root filesystem on your small SSD, and your home volume on a slower mechanical drive. Simply add the physical volume devices to the command line, for example:<br />
<br />
# lvcreate -L 10G VolGroup00 -n lvolhome /dev/sdc1<br />
<br />
If you want to fill all the free space left on a volume group, use the next command:<br />
<br />
# lvcreate -l +100%FREE <''volume_group''> -n <''logical_volume''><br />
<br />
You can track created logical volumes with:<br />
<br />
# lvdisplay<br />
<br />
{{Note|You may need to load the ''device-mapper'' kernel module ({{ic|modprobe dm-mod}}) for the above commands to succeed.}}<br />
<br />
{{Tip|You can start out with relatively small logical volumes and expand them later if needed. For simplicity, leave some free space in the volume group so there is room for expansion.}}<br />
<br />
=== Create file systems and mount logical volumes ===<br />
<br />
Your logical volumes should now be located in {{ic|/dev/mapper/}} and {{ic|/dev/''YourVolumeGroupName''}}. If you cannot find them, use the next commands to bring up the module for creating device nodes and to make volume groups available:<br />
<br />
# modprobe dm-mod<br />
# vgscan<br />
# vgchange -ay<br />
<br />
Now you can create file systems on logical volumes and mount them as normal partitions (if you are installing Arch Linux, refer to [[Beginners' guide#Mount the partitions|mounting the partitions]] for additional details):<br />
<br />
# mkfs.<''fstype''> /dev/mapper/<''volume_group''>-<''logical_volume''><br />
# mount /dev/mapper/<''volume_group''>-<''logical_volume''> /<''mountpoint''><br />
<br />
For example:<br />
<br />
# mkfs.ext4 /dev/mapper/VolGroup00-lvolhome<br />
# mount /dev/mapper/VolGroup00-lvolhome /home<br />
<br />
{{Warning|When choosing mountpoints, select your newly created logical volumes (use: {{ic|/dev/mapper/VolGroup00-lvolhome}}). Do '''not''' select the actual partitions on which the logical volumes were created (do not use: {{ic|/dev/sda2}}).}}<br />
<br />
=== Add lvm hook to mkinitcpio.conf ===<br />
<br />
You will need to make sure the {{Ic|udev}} and {{Ic|lvm2}} [[mkinitcpio]] hooks are enabled.<br />
<br />
{{Ic|udev}} is there by default. Edit the file and insert {{Ic|lvm2}} between {{Ic|block}} and {{Ic|filesystems}} like so:<br />
<br />
{{hc|1= /etc/mkinitcpio.conf|2= HOOKS="base udev ... block '''lvm2''' filesystems"}}<br />
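If you prefer to script the edit, a ''sed'' substitution can splice the hook in. The sketch below operates on a sample line rather than the live file (the sample hook list is an assumption about your configuration); review the result before applying the same substitution to {{ic|/etc/mkinitcpio.conf}}.<br />

```shell
# Insert lvm2 between block and filesystems, demonstrated on a sample
# HOOKS line (an assumption; your actual hook list may differ).
line='HOOKS="base udev autodetect modconf block filesystems keyboard fsck"'
printf '%s\n' "$line" | sed 's/block filesystems/block lvm2 filesystems/'
# prints HOOKS="base udev autodetect modconf block lvm2 filesystems keyboard fsck"
```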
<br />
Afterwards, you can continue in normal installation instructions with the [[Mkinitcpio#Image_creation_and_activation|create an initial ramdisk]] step.<br />
<br />
== Configuration ==<br />
<br />
=== Advanced options ===<br />
<br />
If you need monitoring (required for snapshots), you can enable ''lvmetad'' by setting {{ic|1=use_lvmetad = 1}} in {{ic|/etc/lvm/lvm.conf}}. This is now the default.<br />
<br />
You can restrict the volumes that are activated automatically by setting the {{Ic|auto_activation_volume_list}} in {{Ic|/etc/lvm/lvm.conf}}. If in doubt, leave this option commented out.<br />
<br />
=== Grow physical volume ===<br />
<br />
After changing the size of a device that has a physical volume on it, you need to grow the physical volume using the following command:<br />
<br />
# pvresize ''DEVICE''<br />
<br />
For example, to grow a physical volume located on a mdadm [[RAID]] array:<br />
<br />
# pvresize /dev/md0<br />
<br />
{{Note|This command can be done while the volume is online.}}<br />
<br />
=== Grow logical volume ===<br />
<br />
To increase the available space on a filesystem that resides on a logical volume, two steps need to be done. First grow the logical volume, and then resize the [[File systems|filesystem]] to use the newly created free space.<br />
<br />
==== lvextend ====<br />
<br />
To grow a logical volume, two different commands can be used, {{ic|lvextend}} or {{ic|lvresize}}.<br />
<br />
# lvextend -L +<''size''> <''volume_group''>/<''logical_volume''><br />
<br />
For example:<br />
<br />
# lvextend -L +20G VolGroup00/lvolhome<br />
<br />
If you want to fill all the free space on a volume group, use the following command:<br />
<br />
# lvextend -l +100%FREE <''volume_group''>/<''logical_volume''><br />
<br />
==== resize2fs ====<br />
<br />
To resize an ext2, ext3 or ext4 filesystem:<br />
<br />
# resize2fs /dev/<''volume_group''>/<''logical_volume''><br />
<br />
{{Warning|Not all file systems support growing without loss of data and/or growing online.}}<br />
<br />
{{Note|If you do not resize your filesystem, there will not be more free space available on the filesystem. The logical volume will be bigger but partly unused.}}<br />
<br />
For example:<br />
<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
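<br />
If the filesystem on the logical volume is one that ''fsadm'' can handle (e.g. ext2/3/4), both steps can be combined by passing the {{ic|-r}}/{{ic|--resizefs}} option to ''lvextend'', which resizes the filesystem together with the logical volume:<br />
<br />
 # lvextend -r -L +20G VolGroup00/lvolhome<br />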
<br />
=== Shrink logical volume ===<br />
<br />
Because your file system is probably as big as the logical volume it resides on, you need to shrink the file system first and then shrink the logical volume. Depending on your file system, you may need to unmount it first. Let us say we have a logical volume of 15 GB with ext3 on it and we want to shrink it to 10 GB.<br />
<br />
First shrink the filesystem more than needed so that when you later shrink the logical volume you do not accidentally cut off the end of the filesystem:<br />
<br />
# resize2fs /dev/VolGroup00/lvolhome 9G<br />
<br />
Now resize the logical volume:<br />
<br />
# lvreduce -L 10G VolGroup00/lvolhome<br />
<br />
{{Note|To specify a relative size with ''lvreduce'', you still need to prefix the size with the {{ic|-}} sign.}}<br />
<br />
{{Tip|Alternatively, you may use ''lvresize'' instead of ''lvreduce'':<br />
<br />
# lvresize -L -5G VolGroup00/lvolhome<br />
}}<br />
<br />
Finally, grow the filesystem as usual so that it fills all the free space left on the logical volume:<br />
<br />
# resize2fs /dev/VolGroup00/lvolhome<br />
<br />
{{Warning|<br />
* Do not reduce the file system size to less than the amount of space occupied by data or you risk data loss.<br />
* Not all file systems support shrinking without loss of data and/or shrinking online.<br />
}}<br />
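<br />
Alternatively, the whole shrink procedure can be done in one step with the {{ic|-r}}/{{ic|--resizefs}} option, which shrinks the filesystem to the correct size first and then reduces the logical volume, avoiding the manual intermediate resize shown above:<br />
<br />
 # lvreduce -r -L 10G VolGroup00/lvolhome<br />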
<br />
=== Remove logical volume ===<br />
<br />
{{Warning|Before you remove a logical volume, make sure to move all data that you want to keep somewhere else; otherwise, it will be lost!}}<br />
<br />
First, find out the name of the logical volume you want to remove. You can get a list of all logical volumes with:<br />
<br />
# lvs<br />
<br />
Next, look up the mountpoint of the chosen logical volume:<br />
<br />
$ lsblk<br />
<br />
Then unmount the filesystem on the logical volume:<br />
<br />
# umount /<''mountpoint''><br />
<br />
Finally, remove the logical volume:<br />
<br />
# lvremove <''volume_group''>/<''logical_volume''><br />
<br />
For example:<br />
<br />
# lvremove VolGroup00/lvolhome<br />
<br />
Confirm by typing in {{ic|y}}.<br />
<br />
Update {{ic|/etc/fstab}} as necessary.<br />
<br />
You can verify the removal of the logical volume by typing {{ic|lvs}} as root again (see first step of this section).<br />
<br />
=== Add physical volume to a volume group ===<br />
<br />
First create a new physical volume on the block device you wish to use, then extend your volume group with it:<br />
<br />
{{bc|1=<br />
# pvcreate /dev/sdb1<br />
# vgextend VolGroup00 /dev/sdb1<br />
}}<br />
<br />
This of course will increase the total number of physical extents on your volume group, which can be allocated by logical volumes as you see fit.<br />
<br />
{{Note|It is considered good form to have a [[Partitioning|partition table]] on your storage medium below LVM. Use the appropriate type code: {{ic|8e}} for MBR, and {{ic|8e00}} for GPT partitions.}}<br />
<br />
=== Remove partition from a volume group ===<br />
<br />
All of the data on that partition needs to be moved to another physical volume in the volume group. Fortunately, LVM makes this easy:<br />
# pvmove /dev/sdb1<br />
If you want to have the data on a specific physical volume, specify that as the second argument to {{Ic|pvmove}}:<br />
# pvmove /dev/sdb1 /dev/sdf1<br />
Then the physical volume needs to be removed from the volume group:<br />
# vgreduce myVg /dev/sdb1<br />
Or remove all empty physical volumes:<br />
# vgreduce --all vg0<br />
<br />
And lastly, if you want to use the partition for something else, and want to avoid LVM thinking that the partition is a physical volume:<br />
# pvremove /dev/sdb1<br />
<br />
<br />
=== Deactivate volume group ===<br />
<br />
Just invoke <br />
# vgchange -a n my_volume_group<br />
<br />
This will deactivate the volume group and allow you to unmount the container it is stored in.<br />
<br />
=== Snapshots ===<br />
<br />
==== Introduction ====<br />
<br />
LVM allows you to take a snapshot of your system in a much more efficient way than a traditional backup. It does this efficiently by using a COW (copy-on-write) policy. The initial snapshot merely references the data blocks of the origin volume, so it takes up almost no space. Whenever data that the snapshot references is modified, LVM first copies the original block into the snapshot volume, so the snapshot continues to present the data as it was at creation time while the origin carries the changes. Thus, you can snapshot a system with 35 GB of data using just 2 GB of free space, as long as less than 2 GB is modified (on both the original and snapshot).<br />
<br />
==== Configuration ====<br />
<br />
You create snapshot logical volumes just like normal ones.<br />
<br />
# lvcreate --size 100M --snapshot --name snap01 /dev/mapper/vg0-pv<br />
With that volume, you may modify less than 100M of data before the snapshot volume fills up.<br />
<br />
Reverting the modified 'pv' logical volume to the state when the 'snap01' snapshot was taken can be done with<br />
<br />
 # lvconvert --merge /dev/vg0/snap01<br />
<br />
If the origin logical volume is active, merging will occur on the next reboot (merging can even be done from a live CD).<br />
<br />
The snapshot will no longer exist after merging.<br />
<br />
Multiple snapshots can also be taken, and each one can be merged with the origin logical volume at will.<br />
<br />
The snapshot can be mounted and backed up with '''dd''' or '''tar'''. With '''dd''', the size of the backup file will be the size of the files residing on the snapshot volume.<br />
To restore, just create a snapshot, mount it, write or extract the backup to it, and then merge it with the origin.<br />
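<br />
For example, a backup of the example volume from above via a short-lived snapshot could look like this (the mountpoint and archive path are illustrative):<br />
<br />
 # lvcreate --size 2G --snapshot --name snap01 /dev/VolGroup00/lvolhome<br />
 # mount /dev/VolGroup00/snap01 /mnt/snapshot<br />
 # tar -czf /backup/lvolhome.tar.gz -C /mnt/snapshot .<br />
 # umount /mnt/snapshot<br />
 # lvremove VolGroup00/snap01<br />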
<br />
It is important to have the ''dm_snapshot'' module listed in the MODULES variable of {{ic|/etc/mkinitcpio.conf}}, otherwise the system will not boot. If you do this on an already installed system, make sure to rebuild the image with<br />
# mkinitcpio -g /boot/initramfs-linux.img<br />
<br />
Todo: scripts to automate snapshots of root before updates, to rollback... updating {{ic|menu.lst}} to boot snapshots (separate article?)<br />
<br />
Snapshots are primarily used to provide a frozen copy of a file system for making backups; a backup that takes two hours will see a consistent image of the file system, unlike a direct backup of the in-use partition.<br />
<br />
See [[Create root filesystem snapshots with LVM]] for automating the creation of clean root file system snapshots during system startup for backup and rollback.<br />
<br />
See also [[Dm-crypt/Encrypting an entire system#LVM on LUKS]] and [[Dm-crypt/Encrypting an entire system#LUKS on LVM]].<br />
<br />
If you have LVM volumes not activated via the [[Mkinitcpio|initramfs]], [[#Using units|enable]] the '''lvm-monitoring''' service, which is provided by the {{pkg|lvm2}} package.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Changes that could be required due to changes in the Arch Linux defaults ===<br />
<br />
{{ic|1=use_lvmetad = 1}} must be set in {{ic|/etc/lvm/lvm.conf}}. This is now the default; if you have an {{ic|lvm.conf.pacnew}} file, you must merge this change.<br />
<br />
=== LVM commands do not work ===<br />
<br />
* Load the proper module:<br />
# modprobe dm_mod<br />
<br />
The {{ic|dm_mod}} module should be loaded automatically. If it is not, you can try:<br />
{{Accuracy|Should module loading at boot be done using "/etc/modules-load.d" instead?}}<br />
<br />
{{hc|/etc/mkinitcpio.conf:|<nowiki>MODULES="dm_mod ..."</nowiki>}}<br />
<br />
You will need to [[Mkinitcpio#Image_creation_and_activation|rebuild]] the initramfs to commit any changes you made.<br />
<br />
* Try preceding commands with ''lvm'' like this:<br />
# lvm pvdisplay<br />
<br />
=== Logical Volumes do not show up ===<br />
<br />
If you are trying to mount existing logical volumes, but they do not show up in {{ic|lvscan}}, you can use the following commands to activate them:<br />
<br />
# vgscan<br />
# vgchange -ay<br />
<br />
=== LVM on removable media ===<br />
<br />
Symptoms:<br />
# vgscan<br />
Reading all physical volumes. This may take a while...<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836585984: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 319836643328: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 0: Input/output error<br />
/dev/backupdrive1/backup: read failed after 0 of 4096 at 4096: Input/output error<br />
Found volume group "backupdrive1" using metadata type lvm2<br />
Found volume group "networkdrive" using metadata type lvm2<br />
<br />
Cause:<br />
:Removing an external LVM drive without deactivating the volume group(s) first. Before you disconnect, make sure to:<br />
# vgchange -an ''volume group name''<br />
<br />
Fix: assuming you already tried to activate the volume group with {{ic|# vgchange -ay ''vg''}}, and are receiving the Input/output errors:<br />
# vgchange -an ''volume group name''<br />
Unplug the external drive and wait a few minutes:<br />
# vgscan<br />
# vgchange -ay ''volume group name''<br />
<br />
=== Kernel options ===<br />
<br />
In kernel options, you may need {{ic|dolvm}}. {{ic|<nowiki>root=</nowiki>}} should be set to the logical volume, e.g. {{ic|/dev/mapper/''vg-name''-''lv-name''}}.<br />
<br />
=== Resizing a contiguous logical volume fails === <br />
<br />
If extending a logical volume fails with an error like:<br />
 Insufficient suitable contiguous allocatable extents for logical volume<br />
<br />
The reason is that the logical volume was created with an explicit contiguous allocation policy (options {{ic|-C y}} or {{ic|--alloc contiguous}}) and no further adjacent contiguous extents are available (see also [http://www.hostatic.ro/2010/02/15/lvm-inherit-and-contiguous-policies/ reference]).<br />
<br />
To fix this, prior to extending the logical volume, change its allocation policy with {{ic|lvchange --alloc inherit <logical_volume>}}. If you need to keep the contiguous allocation policy, an alternative approach is to move the volume to a disk area with sufficient free extents (see [http://superuser.com/questions/435075/how-to-align-logical-volumes-on-contiguous-physical-extents]).<br />
<br />
== See also ==<br />
<br />
* [http://sourceware.org/lvm2/ LVM2 Resource Page] on SourceWare.org<br />
* [http://tldp.org/HOWTO/LVM-HOWTO/ LVM HOWTO] article at The Linux Documentation project<br />
* [http://wiki.gentoo.org/wiki/LVM LVM] article at Gentoo wiki<br />
* [http://www.joshbryan.com/blog/2008/01/02/lvm2-mirrors-vs-md-raid-1/ LVM2 Mirrors vs. MD Raid 1] post by Josh Bryan<br />
* [http://www.tutonics.com/2012/11/ubuntu-lvm-guide-part-1.html Ubuntu LVM Guide Part 1], [http://www.tutonics.com/2012/12/lvm-guide-part-2-snapshots.html Part 2 details snapshots]</div>Veoxhttps://wiki.archlinux.org/index.php?title=Cron&diff=261351Cron2013-06-05T20:08:22Z<p>Veox: /* Installation */ dcron is still referenced throughout the page, so put the reference back in. (FIXME: either clean up the article or put dcron back into AUR.)</p>
<hr />
<div>[[Category:Daemons and system services]]<br />
[[de:Cron]]<br />
[[fr:Cron]]<br />
[[sk:Cron]]<br />
[[zh-CN:Cron]]<br />
{{Article summary start}}<br />
{{Article summary text|An overview of the standard task scheduling daemon on GNU/Linux systems.}}<br />
{{Article summary heading|Resources}}<br />
{{Article summary link|Gentoo Linux Cron Guide|http://www.gentoo.org/doc/en/cron-guide.xml}}<br />
{{Article summary end}}<br />
{{Lowercase_title}}<br />
<br />
From [https://en.wikipedia.org/wiki/Cron Wikipedia]:<br />
<br />
'''''cron''' is the time-based job scheduler in Unix-like computer operating systems. cron enables users to schedule jobs (commands or shell scripts) to run periodically at certain times or dates. It is commonly used to automate system maintenance or administration [...]''<br />
<br />
== Installation ==<br />
<br />
{{Pkg|cronie}} is installed by default as part of the '''base''' group. Other cron implementations exist if preferred; Gentoo's [http://www.gentoo.org/doc/en/cron-guide.xml Cron Guide] offers comparisons. For example, {{Pkg|fcron}}, {{AUR|bcron}} and {{AUR|vixie-cron}} are alternatives. {{AUR|dcron}} used to be the default cron implementation in Arch Linux until May 2011.<br />
<br />
== Configuration ==<br />
<br />
=== Users & autostart ===<br />
<br />
cron should be working upon login on a new system to run root scripts. This can be checked by looking at the log in {{ic|/var/log/}}. To use the crontab application (the editor for job entries), users must be members of the {{ic|users}} or {{ic|root}} group, of which all users should already be members. To ensure cron starts on boot, enable {{ic|cronie.service}} or {{ic|dcron.service}} with {{ic|systemctl enable ''service_name''}}, depending on which cron implementation you use.<br />
<br />
=== Handling errors of jobs ===<br />
<br />
Errors can occur during the execution of jobs. When this happens, cron captures the job's '''stderr''' output and attempts to send it as email to the user's mail spool via the {{ic|sendmail}} command.<br />
<br />
To log these messages use the {{ic|-M}} option in {{ic|/etc/conf.d/crond}} and write a script or install a rudimentary SMTP subsystem (e.g. {{Pkg|esmtp}}):<br />
<br />
# pacman -S esmtp procmail<br />
<br />
After installation configure the routing:<br />
{{hc|/etc/esmtprc|<br />
identity ''myself''@myisp.com<br />
hostname mail.myisp.com:25<br />
username ''"myself"''<br />
password ''"secret"''<br />
starttls enabled<br />
default<br />
mda "/usr/bin/procmail -d %T"<br />
}}<br />
<br />
Procmail needs root privileges to work in delivery mode, but this is not an issue if you are running the cron jobs as root anyway.<br />
<br />
To test that everything works correctly, create a file {{ic|message.txt}} with {{ic|"test message"}} in it. <br />
<br />
From the same directory run:<br />
<br />
$ sendmail ''user_name'' < message.txt <br />
<br />
then:<br />
<br />
$ cat /var/spool/mail/''user_name''<br />
<br />
You should now see the test message and the time and date it was sent.<br />
<br />
The error output of all jobs will now be redirected to {{ic|/var/spool/mail/''user_name''}}.<br />
<br />
Due to this privilege issue, it is hard to create and send emails as root (e.g. with {{ic|su -c ""}}). You can ask {{ic|esmtp}} to forward all of root's email to an ordinary user with:<br />
{{hc|/etc/esmtprc|<br />
2=force_mda="''user-name''"<br />
}}<br />
<br />
{{Note|If the above test didn't work, you may try creating a local configuration in {{ic|~/.esmtprc}} with the same content.<br />
<br />
Run the following command to make sure it has the correct permission: <br />
<br />
$ chmod 710 ~/.esmtprc<br />
<br />
Then repeat the test with {{ic|message.txt}} exactly as before.}}<br />
<br />
==== Long cron job ====<br />
<br />
Suppose this program is invoked by cron:<br />
<br />
#!/bin/sh<br />
echo "I had a recoverable error!"<br />
sleep 1h<br />
<br />
What happens is this:<br />
# cron runs the script<br />
# as soon as cron sees some output, it runs your MTA, and provides it with the headers. It leaves the pipe open, because the job hasn't finished and there might be more output.<br />
# the MTA opens the connection to postfix and leaves that connection open while it waits for the rest of the body.<br />
# postfix closes the idle connection after less than an hour and you get an error like this :<br />
smtpmsg='421 … Error: timeout exceeded' errormsg='the server did not accept the mail'<br />
<br />
To solve this problem you can use the command chronic or sponge from {{Pkg|moreutils}}.<br />
From their respective man pages:<br />
; chronic: chronic runs a command, and arranges for its standard out and standard error to only be displayed if the command fails (exits nonzero or crashes). If the command succeeds, any extraneous output will be hidden.<br />
; sponge: sponge reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before opening the output file… If no output file is specified, sponge outputs to stdout.<br />
<br />
Although it is not stated explicitly, chronic buffers the command's output before opening its standard output, just as sponge does.<br />
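<br />
For example, wrapping a long-running job in ''chronic'' both keeps cron quiet on success and avoids the idle-connection timeout described above (the script path is illustrative):<br />
<br />
 0 3 * * * chronic /root/make-backup.sh<br />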
<br />
== Crontab format ==<br />
<br />
The basic format for a crontab is:<br />
<br />
<minute> <hour> <day_of_month> <month> <day_of_week> <command><br />
<br />
* ''minute'' values can be from 0 to 59.<br />
* ''hour'' values can be from 0 to 23.<br />
* ''day_of_month'' values can be from 1 to 31.<br />
* ''month'' values can be from 1 to 12.<br />
* ''day_of_week'' values can be from 0 to 6, with 0 denoting Sunday.<br />
<br />
Multiple times may be specified with a comma, a range can be given with a hyphen, and the asterisk symbol is a wildcard character. Spaces are used to separate fields. For example, the line:<br />
<br />
 */5 9-16 * 1-5,9-12 1-5 ~/bin/i_love_cron.sh<br />
<br />
This will execute the script {{Ic|i_love_cron.sh}} every five minutes from 9:00 AM to 4:55 PM on weekdays, except during the summer months (June, July, and August). More examples and advanced configuration techniques can be found below.<br />
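<br />
The field syntax described above can be captured in a small POSIX shell function. This is a simplified sketch for illustration only (the function name is made up): it handles {{ic|*}}, lists, ranges and steps, but not named months/days, and it treats {{ic|*}} as the range 0-59 regardless of the field's position.<br />

```shell
# Return success (0) if VALUE matches a single cron field such as
# "*", "*/5", "0,30", "1-5" or "9-16".
cron_field_matches() {
    field=$1 value=$2
    set -f                       # disable globbing so "*" stays literal
    old_ifs=$IFS; IFS=','        # split comma-separated lists
    for part in $field; do
        step=1
        case $part in
            */*) step=${part#*/}; part=${part%%/*} ;;  # "*/5" -> step 5
        esac
        case $part in
            '*') lo=0; hi=59 ;;                        # wildcard
            *-*) lo=${part%-*}; hi=${part#*-} ;;       # range "a-b"
            *)   lo=$part; hi=$part ;;                 # single value
        esac
        if [ "$value" -ge "$lo" ] && [ "$value" -le "$hi" ] &&
           [ $(( (value - lo) % step )) -eq 0 ]; then
            IFS=$old_ifs; set +f
            return 0
        fi
    done
    IFS=$old_ifs; set +f
    return 1
}

# 10:10 on a Monday in January matches "*/5 9-16 * 1-5,9-12 1-5":
cron_field_matches '*/5' 10 && cron_field_matches '9-16' 10 &&
    cron_field_matches '1-5,9-12' 1 && cron_field_matches '1-5' 1 &&
    echo "job would run"
```

A real cron daemon implements additional rules (e.g. the day-of-month/day-of-week OR semantics), so treat this only as a reading aid for the format.<br />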
<br />
== Basic commands ==<br />
<br />
Crontabs should never be edited directly; instead, users should use the {{ic|crontab}} program to work with their crontabs. To be granted access to this command, a user must be a member of the ''users'' group (see the {{ic|gpasswd}} command).<br />
<br />
To view their crontabs, users should issue the command:<br />
<br />
$ crontab -l<br />
<br />
To edit their crontabs, they may use:<br />
<br />
$ crontab -e<br />
<br />
To remove their crontabs, they should use:<br />
<br />
$ crontab -r<br />
<br />
If a user has a saved crontab and would like to completely overwrite their old crontab, he or she should use:<br />
<br />
$ crontab ''saved_crontab_filename''<br />
<br />
To overwrite a crontab from the command line ([[Wikipedia:stdin]]), use<br />
<br />
$ crontab - <br />
<br />
To edit somebody else's crontab, issue the following command as root:<br />
<br />
# crontab -u ''username'' -e<br />
<br />
This same format (appending {{ic|-u ''username''}} to a command) works for listing and deleting crontabs as well.<br />
<br />
To use [[nano]] rather than [[vi]] as the crontab editor, add the following line to your shell's initialization file (e.g. {{ic|/etc/profile}} or {{ic|/etc/bash.bashrc}}):<br />
<br />
export EDITOR="/usr/bin/nano"<br />
<br />
And restart open shells.<br />
<br />
== Examples ==<br />
<br />
The entry:<br />
<br />
01 * * * * /bin/echo Hello, world!<br />
<br />
runs the command {{Ic|/bin/echo Hello, world!}} on the first minute of every hour of every day of every month (i.e. at 12:01, 1:01, 2:01, etc.)<br />
<br />
Similarly,<br />
<br />
*/5 * * jan mon-fri /bin/echo Hello, world!<br />
<br />
runs the same job every five minutes on weekdays during the month of January (i.e. at 12:00, 12:05, 12:10, etc.)<br />
<br />
As noted in the ''Crontab Format'' section, the line:<br />
<br />
 */5 9-16 * 1-5,9-12 1-5 /home/user/bin/i_love_cron.sh<br />
<br />
This will execute the script {{Ic|i_love_cron.sh}} every five minutes from 9 AM to 5 PM (excluding 5 PM itself) every weekday (Mon-Fri) of every month, except during the summer (June, July, and August).<br />
<br />
== More information ==<br />
<br />
The cron daemon parses a configuration file known as {{ic|crontab}}. Each user on the system can maintain a separate crontab file to schedule commands individually. The root user's crontab is used to schedule system-wide tasks (though users may opt to use {{ic|/etc/crontab}} or the {{ic|/etc/cron.d}} directory, depending on which cron implementation they choose).<br />
<br />
There are slight differences between the crontab formats of the different cron daemons. The default root crontab for dcron looks like this:<br />
<br />
{{hc|/var/spool/cron/root<br />
|2=<nowiki><br />
# root crontab<br />
# DO NOT EDIT THIS FILE MANUALLY! USE crontab -e INSTEAD<br />
<br />
# man 1 crontab for acceptable formats:<br />
# <minute> <hour> <day> <month> <dow> <tags and command><br />
# <@freq> <tags and command><br />
<br />
# SYSTEM DAILY/WEEKLY/... FOLDERS<br />
@hourly ID=sys-hourly /usr/sbin/run-cron /etc/cron.hourly<br />
@daily ID=sys-daily /usr/sbin/run-cron /etc/cron.daily<br />
@weekly ID=sys-weekly /usr/sbin/run-cron /etc/cron.weekly<br />
@monthly ID=sys-monthly /usr/sbin/run-cron /etc/cron.monthly<br />
</nowiki>}}<br />
<br />
These lines exemplify one of the formats that crontab entries can have, namely whitespace-separated fields specifying:<br />
<br />
# @period<br />
# ID=jobname (this tag is specific to dcron)<br />
# command<br />
<br />
The other standard format for crontab entries is:<br />
<br />
# minute<br />
# hour<br />
# day<br />
# month<br />
# day of week<br />
# command<br />
<br />
The crontab files themselves are usually stored as {{ic|/var/spool/cron/''username''}}. For example, root's crontab is found at {{ic|/var/spool/cron/root}}.<br />
<br />
See the crontab [[man page]] for further information and configuration examples.<br />
<br />
== run-parts issue ==<br />
<br />
cronie uses {{ic|run-parts}} to carry out the scripts in {{ic|cron.daily}}/{{ic|cron.weekly}}/{{ic|cron.monthly}}. Be careful that script names in these directories do not contain a dot ({{ic|.}}), e.g. {{ic|backup.sh}}, since {{ic|run-parts}} without options will ignore them (see {{ic|man run-parts}}).<br />
<br />
== Running Xorg server based applications ==<br />
<br />
If you find that you cannot run X applications from cron jobs, use this prefix:<br />
<br />
export DISPLAY=:0.0 ;<br />
<br />
This sets the {{ic|DISPLAY}} variable to the first display, which is usually correct unless you run multiple X servers on your machine.<br />
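<br />
A complete entry using this prefix might look as follows ({{ic|notify-send}} is just an illustration; any X application works):<br />
<br />
 0 * * * * export DISPLAY=:0.0; /usr/bin/notify-send 'cron' 'hourly reminder'<br />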
<br />
If it still does not work, you need to use {{ic|xhost}} to give your user control over X:<br />
<br />
# xhost +si:localuser:$(whoami)<br />
<br />
== Asynchronous job processing ==<br />
<br />
If you regularly turn off your computer but do not want to miss jobs, there are some solutions available (easiest to hardest):<br />
<br />
===Dcron===<br />
Vanilla dcron supports asynchronous job processing. Just use @hourly, @daily, @weekly or @monthly together with a job name, like this:<br />
<br />
@hourly ID=greatest_ever_job echo This job is very useful.<br />
<br />
===Cronwhip===<br />
([https://aur.archlinux.org/packages.php?ID=21079 AUR], [https://bbs.archlinux.org/viewtopic.php?id=57973 forum thread]): Script to automatically run missed cron jobs; works with the former default cron implementation, dcron.<br />
<br />
===Anacron===<br />
([https://aur.archlinux.org/packages.php?ID=5196 AUR]): Full replacement for dcron, processes jobs asynchronously.<br />
<br />
===Fcron===<br />
([https://www.archlinux.org/packages/community/i686/fcron/ Community], [https://bbs.archlinux.org/viewtopic.php?id=140497 forum thread]): Like anacron, fcron assumes the computer is not always running and, unlike anacron, it can schedule events at intervals shorter than a single day. Like cronwhip, it can run jobs that should have been run during the computer's downtime.<br />
<br />
== Ensuring exclusivity ==<br />
<br />
If you run potentially long-running jobs (e.g. a backup might suddenly take much longer because of many changes or a particularly slow network connection), then {{AUR|lockrun}} can ensure that the cron job does not start a second time.<br />
<br />
5,35 * * * * /usr/bin/lockrun -n /tmp/lock.backup /root/make-backup.sh<br />
<br />
== See Also ==<br />
* [http://gotux.net/arch-linux/crontab-usage/ CronTab Usage Tutorial]</div>Veoxhttps://wiki.archlinux.org/index.php?title=Cron&diff=261329Cron2013-06-05T19:52:52Z<p>Veox: /* Installation */ dcron is no longer available in the AUR.</p>
<hr />
<div>[[Category:Daemons and system services]]<br />
[[de:Cron]]<br />
[[fr:Cron]]<br />
[[sk:Cron]]<br />
[[zh-CN:Cron]]<br />
{{Article summary start}}<br />
{{Article summary text|An overview of the standard task scheduling daemon on GNU/Linux systems.}}<br />
{{Article summary heading|Resources}}<br />
{{Article summary link|Gentoo Linux Cron Guide|http://www.gentoo.org/doc/en/cron-guide.xml}}<br />
{{Article summary end}}<br />
{{Lowercase_title}}<br />
<br />
From [https://en.wikipedia.org/wiki/Cron Wikipedia]:<br />
<br />
'''''cron''' is the time-based job scheduler in Unix-like computer operating systems. cron enables users to schedule jobs (commands or shell scripts) to run periodically at certain times or dates. It is commonly used to automate system maintenance or administration [...]''<br />
<br />
== Installation ==<br />
<br />
{{Pkg|cronie}} is installed by default as part of the '''base''' group. Other cron implementations exist if preferred, Gentoo's [http://www.gentoo.org/doc/en/cron-guide.xml Cron Guide] offers comparisons. For example, {{Pkg|fcron}}, {{AUR|bcron}} or {{AUR|vixie-cron}} are other alternatives.<br />
<br />
== Configuration ==<br />
<br />
=== Users & autostart ===<br />
<br />
cron should be working upon login on a new system to run root scripts. This can be check by looking at the log in {{ic|/var/log/}}. In order to use crontab application (editor for job entries), users must be members of a designated group {{ic|users}} or {{ic|root}}, of which all users should already be members. To ensure cron starts on boot, enable {{ic|cronie.service}} or {{ic|dcron.service}} with {{ic|systemctl enable <service_name>}} depending on which cron implementation you use.<br />
<br />
=== Handling errors of jobs ===<br />
<br />
Errors can occur during execution of jobs. When this happens, cron registers the '''stderr''' output and attempts to send it as email to the user's spools via the {{ic|sendmail}} command.<br />
<br />
To log these messages use the {{ic|-M}} option in {{ic|/etc/conf.d/crond}} and write a script or install a rudimentary SMTP subsystem (e.g. {{Pkg|esmtp}}):<br />
<br />
# pacman -S esmtp procmail<br />
<br />
After installation configure the routing:<br />
{{hc|/etc/esmtprc|<br />
identity ''myself''@myisp.com<br />
hostname mail.myisp.com:25<br />
username ''"myself"''<br />
password ''"secret"''<br />
starttls enabled<br />
default<br />
mda "/usr/bin/procmail -d %T"<br />
}}<br />
<br />
Procmail needs root privileges to work in delivery mode but it is not an issue if you are running the cronjobs as root anyway.<br />
<br />
To test that everything works correctly, create a file {{ic|message.txt}} with {{ic|"test message"}} in it. <br />
<br />
From the same directory run:<br />
<br />
$ sendmail ''user_name'' < message.txt <br />
<br />
then:<br />
<br />
$ cat /var/spool/mail/''user_name''<br />
<br />
You should now see the test message and the time and date it was sent.<br />
<br />
The error output of all jobs will now be redirected to {{ic|/var/spool/mail/''user_name''}}.<br />
<br />
Due to the privileged issue, it is hard to create and send emails to root (e.g. {{ic|su -c ""}}). You can ask {{ic|esmtp}} to forward all root's email to an ordinary user with:<br />
{{hc|/etc/esmtprc|<br />
2=force_mda="''user-name''"<br />
}}<br />
<br />
{{Note|If the above test didn't work, you may try creating a local configuration in {{ic|~/.esmtprc}} with the same content.<br />
<br />
Run the following command to make sure it has the correct permission: <br />
<br />
$ chmod 710 ~/.esmtprc<br />
<br />
Then repeat the test with {{ic|message.txt}} exactly as before.}}<br />
<br />
==== Long cron job ====<br />
<br />
Suppose this program is invoked by cron :<br />
<br />
#!/bin/sh<br />
echo "I had a recoverable error!"<br />
sleep 1h<br />
<br />
What happens is this:<br />
# cron runs the script<br />
# as soon as cron sees some output, it runs your MTA, and provides it with the headers. It leaves the pipe open, because the job hasn't finished and there might be more output.<br />
# the MTA opens the connection to postfix and leaves that connection open while it waits for the rest of the body.<br />
# postfix closes the idle connection after less than an hour and you get an error like this :<br />
smtpmsg='421 … Error: timeout exceeded' errormsg='the server did not accept the mail'<br />
<br />
To solve this problem you can use the command chronic or sponge from {{Pkg|moreutils}}.<br />
From they respective man page :<br />
; chronic: chronic runs a command, and arranges for its standard out and standard error to only be displayed if the command fails (exits nonzero or crashes). If the command succeeds, any extraneous output will be hidden.<br />
; sponge: sponge reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before opening the output file… If no output file is specified, sponge outputs to stdout.<br />
<br />
Even if it's not said chronic buffer the command output before opening its standard output (like sponge does).<br />
<br />
== Crontab format ==<br />
<br />
The basic format for a crontab is:<br />
<br />
<minute> <hour> <day_of_month> <month> <day_of_week> <command><br />
<br />
* ''minute'' values can be from 0 to 59.<br />
* ''hour'' values can be from 0 to 23.<br />
* ''day_of_month'' values can be from 1 to 31.<br />
* ''month'' values can be from 1 to 12.<br />
* ''day_of_week'' values can be from 0 to 6, with 0 denoting Sunday.<br />
<br />
Multiple times may be specified with a comma, a range can be given with a hyphen, and the asterisk symbol is a wildcard character. Spaces are used to separate fields. For example, the line:<br />
<br />
*0,*5 9-16 * 1-5,9-12 1-5 ~/bin/i_love_cron.sh<br />
<br />
Will execute the script {{Ic|i_love_cron.sh}} at five minute intervals from 9 AM to 4:55 PM on weekdays except during the summer months (June, July, and August). More examples and advanced configuration techniques can be found below.<br />
<br />
== Basic commands ==<br />
<br />
Crontabs should never be edited directly; instead, users should use the {{ic|crontab}} program to work with their crontabs. To be granted access to this command, user must be a member of the users group (see the {{ic|gpasswd}} command).<br />
<br />
To view their crontabs, users should issue the command:<br />
<br />
$ crontab -l<br />
<br />
To edit their crontabs, they may use:<br />
<br />
$ crontab -e<br />
<br />
To remove their crontabs, they should use:<br />
<br />
$ crontab -r<br />
<br />
If a user has a saved crontab and would like to completely overwrite their old crontab, he or she should use:<br />
<br />
$ crontab ''saved_crontab_filename''<br />
<br />
To overwrite a crontab from the command line ([[Wikipedia:stdin]]), use<br />
<br />
$ crontab - <br />
<br />
To edit somebody else's crontab, issue the following command as root:<br />
<br />
# crontab -u ''username'' -e<br />
<br />
This same format (appending {{ic|-u ''username''}} to a command) works for listing and deleting crontabs as well.<br />
<br />
To use [[nano]] rather than [[vi]] as the crontab editor, add the following line to your shell's initialization file (e.g. {{ic|/etc/profile}} or {{ic|/etc/bash.bashrc}}):<br />
<br />
export EDITOR="/usr/bin/nano"<br />
<br />
Then restart any open shells.<br />
<br />
== Examples ==<br />
<br />
The entry:<br />
<br />
01 * * * * /bin/echo Hello, world!<br />
<br />
runs the command {{Ic|/bin/echo Hello, world!}} on the first minute of every hour of every day of every month (i.e. at 12:01, 1:01, 2:01, etc.)<br />
<br />
Similarly,<br />
<br />
*/5 * * jan mon-fri /bin/echo Hello, world!<br />
<br />
runs the same job every five minutes on weekdays during the month of January (i.e. at 12:00, 12:05, 12:10, etc.)<br />
<br />
As noted in the ''Crontab format'' section, the line:<br />
<br />
*/5 9-16 * 1-5,9-12 1-5 /home/user/bin/i_love_cron.sh<br />
<br />
will execute the script {{Ic|i_love_cron.sh}} at five minute intervals from 9:00 AM to 4:55 PM on every weekday (Mon-Fri) of every month, except during the summer (June, July, and August).<br />
<br />
== More information ==<br />
<br />
The cron daemon parses a configuration file known as {{ic|crontab}}. Each user on the system can maintain a separate crontab file to schedule commands individually. The root user's crontab is used to schedule system-wide tasks (though users may opt to use {{ic|/etc/crontab}} or the {{ic|/etc/cron.d}} directory, depending on which cron implementation they choose).<br />
<br />
There are slight differences between the crontab formats of the different cron daemons. The default root crontab for dcron looks like this:<br />
<br />
{{hc|/var/spool/cron/root<br />
|2=<nowiki><br />
# root crontab<br />
# DO NOT EDIT THIS FILE MANUALLY! USE crontab -e INSTEAD<br />
<br />
# man 1 crontab for acceptable formats:<br />
# <minute> <hour> <day> <month> <dow> <tags and command><br />
# <@freq> <tags and command><br />
<br />
# SYSTEM DAILY/WEEKLY/... FOLDERS<br />
@hourly ID=sys-hourly /usr/sbin/run-cron /etc/cron.hourly<br />
@daily ID=sys-daily /usr/sbin/run-cron /etc/cron.daily<br />
@weekly ID=sys-weekly /usr/sbin/run-cron /etc/cron.weekly<br />
@monthly ID=sys-monthly /usr/sbin/run-cron /etc/cron.monthly<br />
</nowiki>}}<br />
<br />
These lines exemplify one of the formats that crontab entries can have, namely whitespace-separated fields specifying:<br />
<br />
# @period<br />
# ID=jobname (this tag is specific to dcron)<br />
# command<br />
<br />
The other standard format for crontab entries is:<br />
<br />
# minute<br />
# hour<br />
# day<br />
# month<br />
# day of week<br />
# command<br />
<br />
The crontab files themselves are usually stored as {{ic|/var/spool/cron/username}}. For example, root's crontab is found at {{ic|/var/spool/cron/root}}.<br />
<br />
See the crontab [[man page]] for further information and configuration examples.<br />
<br />
== run-parts issue ==<br />
<br />
cronie uses {{ic|run-parts}} to run the scripts in {{ic|cron.daily}}/{{ic|cron.weekly}}/{{ic|cron.monthly}}. Make sure that the script names in these directories do not contain a dot (.), e.g. {{ic|backup.sh}}, since {{ic|run-parts}} without options silently ignores such files (see {{ic|man run-parts}}).<br />
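By default, {{ic|run-parts}} only executes files whose names consist entirely of letters, digits, underscores and hyphens. The shell sketch below emulates that filter to show why {{ic|backup.sh}} is skipped while {{ic|backup}} runs ({{ic|is_runnable_name}} is a made-up helper, not part of run-parts):<br />

```shell
# Emulate run-parts' default file-name filter: only names made of
# letters, digits, "_" and "-" are executed.
is_runnable_name() {
    case $1 in
        (*[!A-Za-z0-9_-]*) return 1 ;;   # e.g. a dot disqualifies the file
        (*) return 0 ;;
    esac
}

is_runnable_name backup    && echo "backup: would run"
is_runnable_name backup.sh || echo "backup.sh: skipped"
```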
<br />
== Running Xorg server based applications ==<br />
<br />
If you find that you cannot run X applications from cron jobs, prefix the command with:<br />
<br />
export DISPLAY=:0.0 ;<br />
<br />
This sets the {{ic|DISPLAY}} variable to the first display, which is usually right<br />
unless you run multiple X servers on your machine.<br />
<br />
If it still doesn't work, then you need to use {{ic|xhost}} to give your user control<br />
over X:<br />
<br />
# xhost +si:localuser:$(whoami)<br />
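Putting this together, a crontab entry that exports {{ic|DISPLAY}} before running an X client might look like the following (the ten-minute schedule and the {{ic|xmessage}} command are only an illustration):<br />

```
# min   hour  dom  mon  dow  command
*/10    *     *    *    *    export DISPLAY=:0.0; /usr/bin/xmessage 'hello from cron'
```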
<br />
== Asynchronous job processing ==<br />
<br />
If you regularly turn off your computer but do not want to miss jobs, there are some solutions available (easiest to hardest):<br />
<br />
===Dcron===<br />
Vanilla dcron supports asynchronous job processing. Schedule the job with @hourly, @daily, @weekly or @monthly and give it a job name, like this:<br />
<br />
@hourly ID=greatest_ever_job echo This job is very useful.<br />
<br />
===Cronwhip===<br />
([https://aur.archlinux.org/packages.php?ID=21079 AUR], [https://bbs.archlinux.org/viewtopic.php?id=57973 forum thread]): Script to automatically run missed cron jobs; works with the former default cron implementation, dcron.<br />
<br />
===Anacron===<br />
([https://aur.archlinux.org/packages.php?ID=5196 AUR]): Full replacement for dcron, processes jobs asynchronously.<br />
<br />
===Fcron===<br />
([https://www.archlinux.org/packages/community/i686/fcron/ Community], [https://bbs.archlinux.org/viewtopic.php?id=140497 forum thread]): Like anacron, fcron assumes the computer is not always running and, unlike anacron, it can schedule events at intervals shorter than a single day. Like cronwhip, it can run jobs that should have been run during the computer's downtime.<br />
<br />
== Ensuring exclusivity ==<br />
<br />
If you run potentially long-running jobs (e.g., a backup might suddenly run for a long time because of many changes or a particularly slow network connection), then {{AUR|lockrun}} can ensure that the cron job will not start a second time.<br />
<br />
5,35 * * * * /usr/bin/lockrun -n /tmp/lock.backup /root/make-backup.sh<br />
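If lockrun is not available, a similar guard can be sketched in plain shell with {{ic|mkdir}}, which is atomic. This is an illustrative alternative under that assumption, not what lockrun itself does, and {{ic|run_exclusive}} and the lock path are made-up names:<br />

```shell
lockdir=/tmp/backup.lock.demo

# Run the job only if no other instance holds the lock directory.
run_exclusive() {
    if mkdir "$lockdir" 2>/dev/null; then
        echo "lock acquired: running job"
        # ... the actual backup would run here ...
        rmdir "$lockdir"
    else
        echo "previous run still active: skipping"
    fi
}

rm -rf "$lockdir"    # demo only: start from a clean state
run_exclusive
```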
<br />
== See also ==<br />
* [http://gotux.net/arch-linux/crontab-usage/ CronTab Usage Tutorial]</div>Veoxhttps://wiki.archlinux.org/index.php?title=Netctl&diff=253616Netctl2013-04-10T15:08:43Z<p>Veox: /* Configuration */ remove typo and shorten</p>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Networking]]<br />
[[es:Netcfg]]<br />
[[fr:Netcfg]]<br />
[[it:Netcfg]]<br />
[[ja:Netcfg]]<br />
[[ro:Netcfg]]<br />
[[ru:Netcfg]]<br />
[[tr:netcfg]]<br />
[[zh-CN:Netcfg]]<br />
{{Article summary start}}<br />
{{Article summary text|A guide to configuring the network using netctl and network profile scripts.}}<br />
{{Article summary end}}<br />
Netctl is a new Arch project slated to replace [[netcfg]]. Users should regard it as the future of CLI-based network management on Arch Linux.<br />
<br />
==Installation==<br />
The {{Pkg|netctl}} package is available in [[Official Repositories#&#91;core&#93;|&#91;core&#93;]]. Installing netctl will replace netcfg. As of netctl version 0.7, optional dependencies include<br />
*{{Pkg|dialog}}, for menu based WiFi assistance ({{ic|wifi-menu}})<br />
*{{Pkg|dhclient}}, for DHCP support<br />
*{{Pkg|dhcpcd}}, for DHCP support (instead of dhclient)<br />
*{{Pkg|wpa_supplicant}}, for wireless network support<br />
*{{Pkg|ifplugd}}, for automatic wired connections through {{ic|netctl-ifplugd}}<br />
*{{Pkg|ifenslave}}, for bond connections<br />
*{{Pkg|bridge-utils}}, for bridge connections<br />
*{{Pkg|ppp}}, for PPPoE connections<br />
<br />
==Recommended Reading==<br />
Considerable effort has gone into the construction of quality man pages. Users are encouraged to read the following man pages prior to using netctl:<br />
*netctl<br />
*netctl.profile<br />
*netctl.special<br />
<br />
==Configuration==<br />
<br />
{{ic|netctl}} may be used to introspect and control the state of the systemd services for the network profile manager. Example configuration files are provided to assist users in configuring their network connections. These example profiles are located in {{ic|/etc/netctl/examples/}}. The common configurations include:<br />
*ethernet-dhcp<br />
*ethernet-static<br />
*wireless-wpa<br />
*wireless-wpa-static<br />
<br />
To use an example profile, simply copy one of them from {{ic|/etc/netctl/examples/<profile>}} to {{ic|/etc/netctl/<profile>}} and configure it to your needs:<br />
# cp /etc/netctl/examples/wireless-wpa /etc/netctl/my-wireless-wpa<br />
<br />
Once you have created your profile, attempt to establish a connection using it by running:<br />
# netctl start <profile><br />
<br />
If issuing the above command results in a failure, use {{ic|journalctl -xn}} and {{ic|netctl status <profile>}} to obtain a more in-depth explanation of the failure. Make the needed corrections to the failed configuration and retest.<br />
<br />
Once the profile starts successfully, it can be enabled using {{ic|netctl enable <profile>}}. This creates the proper symlink for the profile to be used by {{ic|netctl-auto@.service}}.<br />
<br />
{{Note|The systemd service {{ic|netctl-auto@<interface>.service}} needs to be enabled for automatic wireless connection at boot to work.}}<br />
<br />
{{Note|If you ever need to alter a currently enabled profile, execute {{ic|netctl reenable <profile>}} to apply the changes.}}<br />
<br />
===Migrating from netcfg===<br />
<br />
{{ic|netctl}} uses {{ic|/etc/netctl}} to store its profiles, ''not'' {{ic|/etc/network.d}} ({{ic|netcfg}}'s profile storage location).<br />
<br />
In order to migrate from netcfg, at least the following is needed:<br />
*Move network profile files to the new directory.<br />
*Rename the variables therein according to netctl.profile(5) (most have simply become CamelCase, i.e. CONNECTION= becomes Connection=).<br />
*Unquote interface variables and other variables that don't strictly need quoting (this is mainly a style thing).<br />
*Run {{ic|netctl enable <profile>}} for every profile in the old NETWORKS array. 'last' doesn't work this way, see netctl.special(7).<br />
*Use {{ic|netctl list}} / {{ic|netctl start <profile>}} instead of netcfg-menu. wifi-menu remains available.<br />
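As a sketch of the renaming, a minimal wired-DHCP profile before and after migration might look like this (the profile name {{ic|mynet}} is arbitrary; variable names follow netctl.profile(5)):<br />

```
# netcfg: /etc/network.d/mynet
CONNECTION='ethernet'
INTERFACE='eth0'
IP='dhcp'

# netctl: /etc/netctl/mynet
Connection=ethernet
Interface=eth0
IP=dhcp
```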
<br />
===Password Encryption (256-bit PSK)===<br />
<br />
Users ''not'' wishing to have their passwords stored in ''plain text'' have the option of generating a 256-bit Encrypted PSK.<br />
<br />
If you have not done so already, install {{pkg|wpa_actiond}} from the [[Official Repositories#&#91;core&#93;|&#91;core&#93;]] repository using [[pacman]]<br />
# pacman -S wpa_actiond<br />
<br />
Next, generate your 256-bit Encrypted PSK using [[WPA_supplicant#Configuration_file|wpa_passphrase]]:<br />
Usage: {{ic|wpa_passphrase [ssid] [passphrase]}}<br />
{{hc|$ wpa_passphrase archlinux freenode|2=<br />
network={<br />
ssid="archlinux"<br />
#psk="freenode"<br />
psk=64cf3ced850ecef39197bb7b7b301fc39437a6aa6c6a599d0534b16af578e04a<br />
}<br />
}}<br />
{{Note|This information will be used in your profile, so do not close the terminal.}}<br />
<br />
In a second terminal window copy the example file {{ic|wireless-wpa}} from {{ic|/etc/netctl/examples}} to {{ic|/etc/netctl}}.<br />
# cp /etc/netctl/examples/wireless-wpa /etc/netctl/wireless-wpa<br />
<br />
You will then need to edit {{ic|/etc/netctl/wireless-wpa}} using your favorite text editor and set the {{ic|'''Key'''}} variable of this profile to the ''encrypted pre-shared key'' that was generated earlier by wpa_passphrase.<br />
<br />
Once completed, your network profile {{ic|wireless-wpa}} containing a 256-bit encrypted PSK should resemble:<br />
{{hc|/etc/netctl/wireless-wpa|2=<br />
Description='A simple WPA encrypted wireless connection using 256-bit Encrypted PSK'<br />
Interface=wlp2s2<br />
Connection=wireless<br />
Security=wpa<br />
IP=dhcp<br />
ESSID=archlinux<br />
Key=\"64cf3ced850ecef39197bb7b7b301fc39437a6aa6c6a599d0534b16af578e04a<br />
}}<br />
{{Note|1=Make sure to use the '''special non-quoted rules''' for Key= that are explained at the end of netctl.profile(5)}}<br />
<br />
==Support==<br />
Official announcement thread: https://bbs.archlinux.org/viewtopic.php?id=157670</div>Veoxhttps://wiki.archlinux.org/index.php?title=Udev&diff=102341Udev2010-04-09T16:43:26Z<p>Veox: /* Other Resources */ add link to mailing list</p>
<hr />
<div>[[Category:Hardware detection and troubleshooting (English)]]<br />
[[Category:HOWTOs (English)]]<br />
[[Category:Auto-mounting (English)]]<br />
{{i18n|Udev}}<br />
<br />
== Introduction ==<br />
''"udev is the device manager for the Linux 2.6 kernel series. Primarily, it manages device nodes in {{Filename|/dev}}. It is the successor of devfs and hotplug, which means that it handles the {{Filename|/dev}} directory and all user space actions when adding/removing devices, including firmware load."'' Source: [http://en.wikipedia.org/wiki/Udev Wikipedia]<br />
<br />
udev replaces the functionality of both {{Codeline|hotplug}} and {{Codeline|hwdetect}}.<br />
<br />
udev loads kernel modules asynchronously (in parallel), which can provide a speed increase during bootup. The downside is that it does not always load modules in the same order each time, which can cause problems with devices such as sound cards and network cards (if you have more than one of them). See below for more information on this.<br />
<br />
==About modules auto-loading==<br />
udev will not do ''any'' module loading for you unless {{Codeline|MOD_AUTOLOAD}} is enabled in {{Filename|/etc/rc.conf}}. If you disable auto-loading you must manually load the modules you want/need by putting the list in the {{Codeline|MODULES}} array in {{Filename|[[rc.conf]]}}, you can generate this list with the {{Codeline|hwdetect --modules}} command.<br />
<br />
==About udev rules==<br />
udev rules go in {{Filename|/etc/udev/rules.d/}}, their file name has to end with {{Filename|.rules}}.<br />
<br />
If you want to learn how to write udev rules see [http://www.reactivated.net/writing_udev_rules.html Writing udev rules].<br />
<br />
To get a list of all the attributes of a device you can use to write rules:<br />
# udevadm info -a -p $(udevadm info -q path -n [device name])<br />
<br />
Replace [device name] with the device present in the system, such as '/dev/sda' or '/dev/ttyUSB0'.<br />
<br />
To restart the udev system once you create or modify udev rules, run the following command. Hotpluggable devices, such as USB devices, will probably have to be reconnected for the new rules to take effect.<br />
# udevadm control --reload-rules<br />
<br />
== Tips & Tricks ==<br />
=== Auto mounting USB devices ===<br />
{{Note|In the following rules the mount options are defined as {{Codeline|<nowiki>ENV{mount_options}="relatime,users"</nowiki>}}, see {{Codeline|man mount}} (and possibly {{Codeline|man ntfs-3g}}) for all available options and [[Maximizing Performance#noatime, nodiratime and relatime mount options]] to know why you should use {{Codeline|relatime}}.}}<br />
==== Mounting to {{Filename|/media}}; uses partition label; supports luks encryption ====<br />
This udev rules set automatically mounts devices/partitions that are represented by /dev/sd* (usb sticks, external hard drives and usually also sdcards). If a partition label is available, it mounts the device to /media/<label> and otherwise to /media/usbhd-sd*, e.g. /media/usbhd-sdb1. If the plugged in device is a luks encrypted partition, it will open a xterm window to ask for the passphrase provided that xterm is installed. Also see [http://bbs.archlinux.org/viewtopic.php?pid=696239#p696239 this post] and the follow-ups.<br />
{{File|name=/etc/udev/rules.d/11-media-by-label-auto-mount.rules|content=<nowiki><br />
KERNEL!="sd[a-z]*", GOTO="media_by_label_auto_mount_end"<br />
ACTION=="add", PROGRAM!="/sbin/blkid %N", GOTO="media_by_label_auto_mount_end"<br />
<br />
# Open luks partition if necessary<br />
PROGRAM=="/sbin/blkid -o value -s TYPE %N", RESULT=="crypto_LUKS", ENV{crypto}="mapper/", ENV{device}="/dev/mapper/%k"<br />
ENV{crypto}!="?*", ENV{device}="%N"<br />
ACTION=="add", ENV{crypto}=="?*", PROGRAM=="/usr/bin/xterm -display :0.0 -e 'echo Password for /dev/%k; /usr/sbin/cryptsetup luksOpen %N %k'"<br />
ACTION=="add", ENV{crypto}=="?*", TEST!="/dev/mapper/%k", GOTO="media_by_label_auto_mount_end"<br />
<br />
# Global mount options<br />
ACTION=="add", ENV{mount_options}="noatime,users"<br />
# Filesystem specific options<br />
ACTION=="add", PROGRAM=="/sbin/blkid -o value -s TYPE %E{device}", RESULT=="vfat|ntfs", ENV{mount_options}="%E{mount_options},utf8,gid=100,umask=002"<br />
<br />
# Get label<br />
ACTION=="add", PROGRAM=="/sbin/blkid -o value -s LABEL %E{device}", ENV{dir_name}="%c"<br />
# use basename to correctly handle labels such as ../mnt/foo<br />
ACTION=="add", PROGRAM=="/usr/bin/basename '%E{dir_name}'", ENV{dir_name}="%c"<br />
ACTION=="add", ENV{dir_name}!="?*", ENV{dir_name}="usbhd-%k"<br />
<br />
ACTION=="add", ENV{dir_name}=="?*", RUN+="/bin/mkdir -p '/media/%E{dir_name}'", RUN+="/bin/mount -o %E{mount_options} /dev/%E{crypto}%k '/media/%E{dir_name}'"<br />
ACTION=="remove", ENV{dir_name}=="?*", RUN+="/bin/umount -l '/media/%E{dir_name}'", RUN+="/bin/rmdir '/media/%E{dir_name}'"<br />
ACTION=="remove", ENV{crypto}=="?*", RUN+="/usr/sbin/cryptsetup luksClose %k"<br />
LABEL="media_by_label_auto_mount_end"<br />
</nowiki>}}<br />
==== Mounting to {{Filename|/media}}; uses partition label, supports user unmounting ====<br />
This is a variation on the above ruleset. It uses pmount instead of mount to allow user unmounting of udev-mounted devices. The required username must be hard-coded in the RUN command, so this ruleset may not be suitable for multi-user systems. Luks support has also been removed, but can of course be reinstated as above.<br />
{{File|name=/etc/udev/rules.d/11-media-by-label-with-pmount.rules|content=<nowiki><br />
KERNEL!="sd[a-z]*", GOTO="media_by_label_auto_mount_end"<br />
ACTION=="add", PROGRAM!="/sbin/blkid %N", GOTO="media_by_label_auto_mount_end"<br />
<br />
# Global mount options<br />
ACTION=="add", ENV{mount_options}="noatime,users"<br />
# Filesystem specific options<br />
ACTION=="add", PROGRAM=="/sbin/blkid -o value -s TYPE %N", RESULT=="vfat|ntfs", ENV{mount_options}="%E{mount_options},utf8,gid=100,umask=002"<br />
<br />
# Get label<br />
ACTION=="add", PROGRAM=="/sbin/blkid -o value -s LABEL %N", ENV{dir_name}="%c"<br />
# use basename to correctly handle labels such as ../mnt/foo<br />
ACTION=="add", PROGRAM=="/usr/bin/basename '%E{dir_name}'", ENV{dir_name}="%c"<br />
ACTION=="add", ENV{dir_name}!="?*", ENV{dir_name}="usbhd-%k"<br />
<br />
ACTION=="add", ENV{dir_name}=="?*", RUN+="/bin/su tomk -c '/usr/bin/pmount %N %E{dir_name}'"<br />
ACTION=="remove", ENV{dir_name}=="?*", RUN+="/bin/su tomk -c '/usr/bin/pumount /media/%E{dir_name}'"<br />
LABEL="media_by_label_auto_mount_end"<br />
</nowiki>}}<br />
<br />
{{Warning| '''ALL''' of the following is outdated and will not work anymore. While it shows some very neat tricks, you will have to change many things because the program 'vol_id' is not delivered with udev anymore. In fact, it has been replaced by 'blkid' which is part of the util-linux-ng package. However, 'blkid' is much different from vol_id.}}<br />
<br />
==== Mounting to {{Filename|/mnt/usbhd-sdXY}} ====<br />
{{File|name=/etc/udev/rules.d/11-mnt-auto-mount.rules|content=<nowiki><br />
KERNEL!="sd[a-z][0-9]", GOTO="mnt_auto_mount_end"<br />
ACTION=="add", RUN+="/bin/mkdir -p /media/usbhd-%k", RUN+="/bin/ln -s /media/usbhd-%k /mnt/usbhd-%k"<br />
<br />
# Global mount options<br />
ACTION=="add", ENV{mount_options}="relatime,users"<br />
# Filesystem specific options<br />
ACTION=="add", PROGRAM=="/lib/initcpio/udev/vol_id -t %N", RESULT=="vfat|ntfs", ENV{mount_options}="$env{mount_options},utf8,gid=100,umask=002"<br />
<br />
ACTION=="add", RUN+="/bin/mount -o $env{mount_options} /dev/%k /mnt/usbhd-%k"<br />
ACTION=="remove", RUN+="/bin/umount -l /mnt/usbhd-%k", RUN+="/bin/rmdir /mnt/usbhd-%k"<br />
LABEL="mnt_auto_mount_end"<br />
</nowiki>}}<br />
<br />
==== Mounting to {{Filename|/mnt/usbhd-sdXY}} and creating a symbolic link in {{Filename|/media}} ====<br />
{{File|name=/etc/udev/rules.d/11-mnt-media-auto-mount.rules|content=<nowiki><br />
KERNEL!="sd[a-z][0-9]", GOTO="mnt_media_auto_mount_end"<br />
<br />
# Global mount options<br />
ACTION=="add", ENV{mount_options}="relatime,users"<br />
# Filesystem specific options<br />
ACTION=="add", PROGRAM=="/lib/initcpio/udev/vol_id -t %N", RESULT=="vfat|ntfs", ENV{mount_options}="$env{mount_options},utf8,gid=100,umask=002"<br />
<br />
ACTION=="add", RUN+="/bin/mkdir -p /mnt/usbhd-%k", RUN+="/bin/mount -o $env{mount_options} /dev/%k /mnt/usbhd-%k", RUN+="/bin/ln -s /mnt/usbhd-%k /media/usbhd-%k"<br />
ACTION=="remove", RUN+="/bin/rm -f /media/usbhd-%k", RUN+="/bin/umount -l /mnt/usbhd-%k", RUN+="/bin/rmdir /mnt/usbhd-%k"<br />
LABEL="mnt_media_auto_mount_end"<br />
</nowiki>}}<br />
<br />
==== Mounting to {{Filename|/media}} using the partition label if it exists ====<br />
Another version that creates the directory only in /media and mounts using the partition's label if it exists (falling back to usbhd-sdXY otherwise):<br />
{{File|name=/etc/udev/rules.d/11-media-by-label-auto-mount.rules|content=<nowiki><br />
KERNEL!="sd[a-z][0-9]", GOTO="media_by_label_auto_mount_end"<br />
<br />
# Global mount options<br />
ACTION=="add", ENV{mount_options}="relatime,users"<br />
# Filesystem specific options<br />
ACTION=="add", PROGRAM=="/lib/initcpio/udev/vol_id -t %N", RESULT=="vfat|ntfs", ENV{mount_options}="$env{mount_options},utf8,gid=100,umask=002"<br />
<br />
ACTION=="add", PROGRAM=="/lib/initcpio/udev/vol_id --label %N", ENV{dir_name}="%c"<br />
ACTION=="add", PROGRAM!="/lib/initcpio/udev/vol_id --label %N", ENV{dir_name}="usbhd-%k"<br />
ACTION=="add", RUN+="/bin/mkdir -p /media/%E{dir_name}", RUN+="/bin/mount -o $env{mount_options} /dev/%k /media/%E{dir_name}"<br />
ACTION=="remove", ENV{dir_name}=="?*", RUN+="/bin/umount -l /media/%E{dir_name}", RUN+="/bin/rmdir /media/%E{dir_name}"<br />
LABEL="media_by_label_auto_mount_end"<br />
</nowiki>}}<br />
<br />
==== Mounting to {{Filename|/media}} only if the partition has a label ====<br />
{{File|name=/etc/udev/rules.d/11-media-by-label-only-auto-mount.rules|content=<nowiki><br />
KERNEL!="sd[a-z][0-9]", GOTO="media_by_label_only_auto_mount_end"<br />
ACTION=="add", PROGRAM!="/lib/initcpio/udev/vol_id --label %N", GOTO="media_by_label_only_auto_mount_end"<br />
ACTION=="add", RUN+="/bin/mkdir -p /media/$env{ID_FS_LABEL}"<br />
<br />
# Global mount options<br />
ACTION=="add", ENV{mount_options}="relatime,users"<br />
# Filesystem specific options<br />
ACTION=="add", PROGRAM=="/lib/initcpio/udev/vol_id -t %N", RESULT=="vfat|ntfs", ENV{mount_options}="$env{mount_options},utf8,gid=100,umask=002"<br />
<br />
ACTION=="add", RUN+="/bin/mount -o $env{mount_options} /dev/%k /media/$env{ID_FS_LABEL}"<br />
ACTION=="remove", ENV{ID_FS_LABEL}=="?*", RUN+="/bin/umount -l /media/$env{ID_FS_LABEL}", RUN+="/bin/rmdir /media/$env{ID_FS_LABEL}"<br />
LABEL="media_by_label_only_auto_mount_end"<br />
</nowiki>}}<br />
<br />
==== Mounting SD cards ====<br />
The same rules as above can be used to auto-mount SD cards, you just need to replace {{Codeline|sd[a-z][0-9]}} by {{Codeline|mmcblk[0-9]p[0-9]}}:<br />
{{File|name=/etc/udev/rules.d/11-sd-cards-auto-mount.rules|content=<nowiki><br />
KERNEL!="mmcblk[0-9]p[0-9]", GOTO="sd_cards_auto_mount_end"<br />
ACTION=="add", RUN+="/bin/mkdir -p /media/sd-%k", RUN+="/bin/ln -s /media/sd-%k /mnt/sd-%k"<br />
<br />
# Global mount options<br />
ACTION=="add", ENV{mount_options}="relatime,users"<br />
# Filesystem specific options<br />
ACTION=="add", PROGRAM=="/lib/initcpio/udev/vol_id -t %N", RESULT=="vfat|ntfs", ENV{mount_options}="$env{mount_options},utf8,gid=100,umask=002"<br />
<br />
ACTION=="add", RUN+="/bin/mount -o $env{mount_options} /dev/%k /media/sd-%k"<br />
ACTION=="remove", RUN+="/bin/umount -l /media/sd-%k", RUN+="/bin/rmdir /media/sd-%k"<br />
LABEL="sd_cards_auto_mount_end"<br />
</nowiki>}}<br />
<br />
==== Accessing Firmware Programmers and USB Virtual Comm Devices ====<br />
The following ruleset will allow normal users (within the "users" group) the ability to access the [http://www.ladyada.net/make/usbtinyisp/ USBtinyISP] USB programmer for AVR microcontrollers and a generic (SiLabs [http://www.silabs.com/products/interface/usbtouart CP2102]) USB to UART adapter. Adjust the permissions accordingly. Verified as of 2010-02-11.<br />
<br />
{{File|name=/etc/udev/rules.d/50-embedded_devices.rules|content=<nowiki><br />
# USBtinyISP Programmer rules<br />
SUBSYSTEMS=="usb", ATTRS{idVendor}=="1781", ATTRS{idProduct}=="0c9f", GROUP="users", MODE="0666"<br />
SUBSYSTEMS=="usb", ATTRS{idVendor}=="16c0", ATTRS{idProduct}=="0479", GROUP="users", MODE="0666"<br />
<br />
# Mdfly.com Generic (SiLabs CP2102) 3.3v/5v USB VComm adapter<br />
SUBSYSTEMS=="usb", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", GROUP="users", MODE="0666"<br />
</nowiki>}}<br />
<br />
==Troubleshooting==<br />
=== Disabling modules auto-loading with the load_modules boot parameter ===<br />
If you pass {{Codeline|<nowiki>load_modules=off</nowiki>}} on your kernel boot line, then udev will skip all the auto-loading business. This is to provide you with a big ripcord to pull if something goes wrong. If udev loads a problematic module that hangs your system or something equally awful, then you can bypass auto-loading with this parameter, then go in and blacklist the offensive module(s).<br />
<br />
=== Blacklisting Modules ===<br />
In rare cases, udev can make mistakes and load the wrong modules. To prevent it from doing this, you can blacklist modules. Once blacklisted, udev will never load that module, neither at boot-time ''nor'' later on when a hotplug event is received (e.g., when you plug in your USB flash drive).<br />
<br />
To blacklist a module, just prefix it with a bang (!) in your {{Codeline|MODULES}} array in {{Filename|[[rc.conf]]}}:<br />
MODULES=(!moduleA !moduleB)<br />
<br />
=== Known Problems with Hardware ===<br />
====BusLogic devices can be broken and will cause a freeze during startup====<br />
This is a kernel bug and no fix has been provided yet.<br />
====PCMCIA Card readers are not treated as removable devices====<br />
To get access to them with hal's pmount backend add them to {{Filename|/etc/pmount.allow}}<br />
<br />
=== Known Problems with Auto-Loading ===<br />
==== CPU frequency modules ====<br />
The current detection method for the various CPU frequency controllers is inadequate, so this has been omitted from the auto-loading process for the time being. To use CPU frequency scaling, load the proper module explicitly in your {{Codeline|MODULES}} array in {{Filename|[[rc.conf]]}}.<br />
<br />
==== Sound Problems or Some Modules Not Loaded Automatically ====<br />
Some users have traced this problem to old entries in {{Codeline|/etc/modprobe.conf}}. Try cleaning that file out and trying again.<br />
<br />
==== Mixed Up Devices, Sound/Network Cards Changing Order Each Boot ====<br />
Because udev loads all modules asynchronously, they are initialized in a different order. This can result in devices randomly switching names. For example, with two network cards, you may notice a switching of designations between {{Codeline|eth0}} and {{Codeline|eth1}}.<br />
<br />
Arch Linux provides the advantage of specifying the module load order by listing the modules in the {{Codeline|MODULES}} array in {{Filename|[[rc.conf]]}}. Modules in this array are loaded before udev begins auto-loading, so you have full control over the load order.<br />
<br />
# Always load 8139too before e100<br />
MODULES=(8139too e100)<br />
<br />
Another method for network card ordering is to use the udev-sanctioned method of statically-naming each interface. Create the following file to bind the MAC address of each of your cards to a certain interface name:<br />
{{File|name=/etc/udev/rules.d/10-network.rules|content=<nowiki><br />
SUBSYSTEM=="net", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="lan0"<br />
SUBSYSTEM=="net", ATTR{address}=="ff:ee:dd:cc:bb:aa", NAME="wlan0"<br />
</nowiki>}}<br />
<br />
A couple things to note:<br />
* To get the MAC address of each card, use this command: {{Codeline|<nowiki>udevadm info -a -p /sys/class/net/<yourdevice> | grep address</nowiki>}}<br />
* Make sure to use lower-case hex values in your udev rules; udev does not match upper-case values.<br />
* Some people have problems naming their interfaces after the old style: eth0, eth1, etc. Try something like "lan" or "wlan" if you experience this problem.<br />
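As a small illustration of the lower-case requirement, an address read from {{ic|/sys/class/net/<device>/address}} can be normalised with {{ic|tr}} before pasting it into a rule (the MAC value below is a placeholder):<br />

```shell
# Normalise a MAC address to the lower-case form udev rules expect.
# The address here is a placeholder; read yours from
# /sys/class/net/<device>/address instead.
mac="AA:BB:CC:DD:EE:FF"
mac_lc=$(printf '%s' "$mac" | tr 'A-F' 'a-f')
echo "$mac_lc"    # aa:bb:cc:dd:ee:ff
```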
<br />
Don't forget to update your {{Filename|/etc/rc.conf}} and other configuration files using the old ethX notation!<br />
<br />
=== Known Problems for Custom Kernel Users ===<br />
==== Udev doesn't start at all ====<br />
Make sure you have a kernel version later than or equal to 2.6.15. Earlier kernels do not have the necessary uevent stuff that udev needs for auto-loading.<br />
<br />
==== CD/DVD symlinks and permissions are broken ====<br />
If you're using a 2.6.15 kernel, you'll need the uevent patch from ABS (which backports certain uevent functionality from 2.6.16). Just sync up your ABS tree with the {{Codeline|abs}} command, then you'll find the patch in {{Codeline|/var/abs/kernels/kernel26/}}.<br />
<br />
==Other Resources==<br />
* [http://www.kernel.org/pub/linux/utils/kernel/hotplug/udev.html Udev Homepage]<br />
* [http://www.linux.com/news/hardware/peripherals/180950-udev An Introduction to Udev]<br />
* [http://vger.kernel.org/vger-lists.html#linux-hotplug Udev mailing list information]</div>Veoxhttps://wiki.archlinux.org/index.php?title=Awesome_(window_manager)&diff=91469Awesome (window manager)2010-01-09T13:46:33Z<p>Veox: /* Transparency */ awesome 3.4, signals, common conky problems</p>
<hr />
<div>[[Category:Desktop environments (English)]]<br />
[[Category:HOWTOs (English)]]<br />
{{i18n_links_start}}<br />
{{i18n_entry|Česky|Awesome (Česky)}}<br />
{{i18n_entry|English|Awesome3}}<br />
{{i18n_entry|简体中文|Awesome_(简体中文)}}<br />
{{i18n_links_end}}<br />
<br />
From the awesome website:<br />
<br />
"''[http://awesome.naquadah.org/ awesome] is a highly configurable, next generation framework window manager for X. It is very fast, extensible and licensed under the GNU GPLv2 license.<br />
<br />
It is primarily targeted at power users, developers and any people dealing with every day computing tasks and who want to have fine-grained control on its graphical environment.''"<br />
<br />
==Installation==<br />
<br />
Awesome 3.x is available in the community repositories, just:<br />
# pacman -S awesome<br />
<br />
===Awesome-git===<br />
Git-based development versions are available from AUR, see [http://aur.archlinux.org/packages.php?ID=13916 awesome-git] <br />
<br />
There are several ways to install them.<br />
The easiest is to use "yaourt" (also available in the AUR). If you have yaourt installed, simply:<br />
# yaourt -S awesome-git<br />
<br />
==Getting Started==<br />
<br />
===Using awesome===<br />
To run awesome without a login manager, simply add '''<tt>exec awesome</tt>''' to the startup script of your choice (e.g. ~/.xinitrc.)<br />
<br />
If you have problems with some devices (like mounting usbkeys, reading dvds) be sure to read documentation about [[HAL]] and policykit. When you don't use a login manager, nothing is automated. In some cases, using '''<tt>exec ck-launch-session awesome</tt>''' can solve your problems.<br />
<br />
To start awesome from a login manager, see [[Adding a login manager (KDM, GDM, or XDM) to automatically boot on startup | this article]]. <br />
<br />
'''[[SLIM]]''' is a popular lightweight login manager and comes highly recommended. To use it with awesome:<br />
<br />
1) Edit /etc/slim.conf to add awesome to the sessions line. <br>For example: <br />
sessions awesome,wmii,xmonad<br />
2) Edit the ~/.xinitrc file: <br />
DEFAULT_SESSION=awesome<br />
case $1 in<br />
awesome) exec awesome ;;<br />
wmii) exec wmii ;;<br />
xmonad) exec xmonad ;;<br />
*) exec $DEFAULT_SESSION ;;<br />
esac<br />
However, you can also start awesome as preferred user without any login manager and even without logging in, after editing ~/.xinitrc and /etc/inittab properly. Refer to the article [[Start X at boot]].<br />
<br />
==Configuration==<br />
Awesome includes some good default settings right out of the box, but sooner or later you'll want to change something. The Lua-based configuration file is at <tt>~/.config/awesome/rc.lua</tt>.<br />
<br />
===Creating the configuration file===<br />
On startup, awesome will attempt to use whatever custom settings are contained in ~/.config/awesome/rc.lua. This file is not created by default, so we must copy the template file first:<br />
$ cp /etc/xdg/awesome/rc.lua ~/.config/awesome/rc.lua<br />
<br />
The configuration syntax often changes when awesome updates, so remember to repeat the command above when awesome starts behaving strangely after an upgrade, or when you would like to modify the configuration.<br />
<br />
For more information about configuring awesome, check out the [http://awesome.naquadah.org/wiki/Awesome_3_configuration configuration page at the awesome wiki].<br />
<br />
===More configuration resources===<br />
{{Note|The syntax of awesome configuration changes regularly, so you will likely have to modify any file you download.}}<br />
<br />
Some good examples of rc.lua files:<br />
<br />
* http://git.sysphere.org/awesome-configs/tree/ - Awesome 3.4 configurations from Adrian C. (anrxc)<br />
* http://pastebin.com/f6e4b064e - Darthlukan's awesome 3.4 configuration. <br />
* http://www.calmar.ws/dotfiles/dotfiledir/dot_awesomerc.lua<br />
* http://github.com/wolgri/wolgri.config/tree/master/.config/awesome/rc.lua<br />
* http://oxmoz.no-ip.org/awesome/rc.lua<br />
* http://www.ugolnik.info/downloads/awesome/rc.lua (screen) - Awesome 3 with small titlebar and statusbar.<br />
* http://github.com/bash/dotfiles/blob/master/.config/awesome/rc.lua<br />
* http://github.com/nblock/config/blob/master/.config/awesome/rc.lua<br />
* User Configuration Files http://awesome.naquadah.org/wiki/User_Configuration_Files<br />
<br />
===Debug rc.lua using Xephyr===<br />
<br />
This is my preferred way to debug rc.lua without breaking my current desktop. I first copy my rc.lua into a new file, rc.lua.new, and modify it as needed. Then, I run a new instance of awesome in Xephyr (which allows you to run X nested in another X's client window - [http://www.dante4d.cz/pub/screenie/2009-08-01-025216_1920x1200_scrot.png screenshot]), supplying rc.lua.new as a config file like this:<br />
<br />
$ Xephyr -ac -br -noreset -screen 1152x720 :1 &<br />
$ DISPLAY=:1.0 awesome -c ~/.config/awesome/rc.lua.new<br />
<br />
The big advantage of this approach is that if I break rc.lua.new, I do not break my current awesome desktop (and possibly crash all my X apps, lose all unsaved work, and so on). Once I am happy with my new settings, I move rc.lua.new to rc.lua and restart awesome, and I can be sure the new configuration will work without messing things up.<br />
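The two commands above can be wrapped in a small helper script. A sketch (the display-picking logic simply looks for the first X display without a lock file; the directory argument only exists to make the function easy to test):<br />

```shell
#!/bin/sh
# Find the first free X display number by looking for server lock files.
free_display() {
    dir=${1:-/tmp}
    n=1
    while [ -e "$dir/.X${n}-lock" ]; do
        n=$((n + 1))
    done
    echo "$n"
}

# Run a nested X server and a test instance of awesome inside it.
test_rc() {
    d=$(free_display)
    Xephyr -ac -br -noreset -screen 1152x720 ":$d" &
    xephyr_pid=$!
    sleep 1
    DISPLAY=":$d.0" awesome -c "$HOME/.config/awesome/rc.lua.new"
    kill "$xephyr_pid"
}
```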
<br />
==Themes==<br />
<br />
Beautiful is a Lua library that allows you to theme awesome using an external file. It becomes very easy to dynamically change your whole awesome colour scheme and wallpaper without changing your rc.lua. <br />
<br />
The default theme is at /usr/share/awesome/themes/default. Copy it to ~/.config/awesome/themes/default and change theme_path in rc.lua. <br />
<br />
More details [http://awesome.naquadah.org/wiki/Beautiful here]<br />
<br />
A few sample [http://awesome.naquadah.org/wiki/Beautiful_themes themes]<br />
<br />
===Setting up your wallpaper===<br />
<br />
Beautiful can handle your wallpaper, so you do not need to set it up in your .xinitrc or .xsession files. This allows you to have a specific wallpaper for each theme. If you take a look at the default theme file, you will see a wallpaper_cmd key; the given command is executed when beautiful.init("path_to_theme_file") is run. You can put your own command here, or remove/comment the key if you do not want Beautiful to interfere with your wallpaper business.<br />
<br />
For instance, if you use awsetbg to set your wallpaper, you can write:<br />
<br />
wallpaper_cmd = { "awsetbg -f .config/awesome/themes/awesome-wallpaper.png" }<br />
<br />
====Random Background Image====<br />
To rotate the wallpapers randomly, just comment out the wallpaper_cmd line above, and add the code below to your .xinitrc:<br />
<pre><br />
while true;<br />
do<br />
awsetbg -r <path/to/the/directory/of/your/wallpapers><br />
sleep 15m<br />
done &<br />
</pre><br />
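If you prefer to pick the random image yourself rather than rely on awsetbg's -r handling, the loop above can be split into small functions. A sketch (assumes GNU sort for -R and file names without newlines):<br />

```shell
#!/bin/sh
# Print a random file name from a directory.
random_file() {
    ls "$1" | sort -R | head -n 1
}

# Endless wallpaper rotation, to be started from ~/.xinitrc:
rotate_wallpapers() {
    while true; do
        awsetbg "$1/$(random_file "$1")"
        sleep 15m
    done
}
```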
<br />
==Tips & Tricks==<br />
Feel free to add any tips or tricks that you would like to pass on to other awesome users.<br />
<br />
===Expose effect like compiz===<br />
<br />
Revelation brings up a view of all your open clients; left-clicking a client pops to the first tag that client is visible on and raises/focuses the client. In addition, the Enter key pops to the currently focused client, and Escape aborts. <br />
<br />
http://awesome.naquadah.org/wiki/Revelation<br />
<br />
===Hide / show wibox in awesome 3===<br />
<br />
To map Modkey-b to hide/show the default statusbar on the active screen (as was the default in awesome 2.3), add this to your ''clientkeys'' in rc.lua:<br />
<br />
awful.key({ modkey }, "b", function ()<br />
mywibox[mouse.screen].visible = not mywibox[mouse.screen].visible<br />
end),<br />
<br />
===Enable printscreens===<br />
<br />
To enable printscreens in awesome through the PrtScr button, you need a screen-capturing program.<br />
Scrot is an easy-to-use utility for this purpose and is available in the Arch repositories.<br />
<br />
Just type:<br />
# pacman -S scrot<br />
<br />
and install optional dependencies if you feel that you need them.<br />
<br />
Next, we need to get the key name for PrtScr; most often this is named "Print", but one can never be too sure.<br />
<br />
Start up:<br />
# xev<br />
<br />
Then press the PrtScr button; the output should be something like:<br />
KeyPress event ....<br />
root 0x25c, subw 0x0, ...<br />
state 0x0, keycode 107 (keysym 0xff61, '''Print'''), same_screen YES,<br />
....<br />
<br />
In this case, as you can see, the key name is Print.<br />
<br />
Now to the configuration of awesome!<br />
<br />
Somewhere in your globalkeys array (it does not matter where), add:<br />
<br />
Lua code:<br />
<br />
awful.key({ }, "Print", function () awful.util.spawn("scrot -e 'mv $f ~/screenshots/ 2>/dev/null'") end),<br />
<br />
A good place for this is below the keybinding for spawning a terminal.<br />
To find that line, search for awful.util.spawn(terminal) in your favourite text editor.<br />
<br />
Also, this function saves screenshots inside ~/screenshots/; edit this to fit your needs.<br />
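If you would rather have timestamped file names than move the files afterwards, the spawned command can be a small script that builds the name itself. A sketch (the script path and ~/screenshots directory are just examples):<br />

```shell
#!/bin/sh
# Timestamped screenshot helper; bind it in rc.lua with e.g.
#   awful.key({ }, "Print", function () awful.util.spawn("shot.sh") end),
shot_name() {
    echo "shot-$(date +%Y-%m-%d-%H%M%S).png"
}

take_shot() {
    mkdir -p "$HOME/screenshots"
    scrot "$HOME/screenshots/$(shot_name)"
}
```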
<br />
===Dynamic tagging using Eminent===<br />
<br />
TODO...<br />
[http://awesome.naquadah.org/wiki/index.php?title=Eminent]<br />
<br />
Note: Eminent is a bit old and outdated. We have a new library in the works, Shifty. It hasn't been included with the main source yet, but is very promising. I'd recommend waiting for that and then writing this section for it. [http://garoth.com/awesome/shifty.ogv Shifty Video (.ogv)]<br />
<br />
Note2: There is actually yet another implementation of this feature. I'm hoping it merges with Shifty, and I'm poking the developers in that direction.<br />
<br />
===Space Invaders===<br />
Space Invaders is a demo to show the possibilities of the Awesome Lua API.<br />
[http://awesome.naquadah.org/wiki/Space_Invaders]<br />
<br />
Please note that it is no longer included in the Awesome package since the 3.4-rc1 release.<br />
<br />
===Naughty for popup notification===<br />
TODO<br />
[http://awesome.naquadah.org/wiki/index.php?title=Naughty]<br />
<br />
===Popup Menus===<br />
There is a simple menu by default in awesome3, and custom menus are now very easy to create. However, if you are using 2.x awesome, have a look at ''[http://awesome.naquadah.org/wiki/index.php?title=Awful.menu awful.menu]''.<br />
<br />
An example for awesome3:<br />
<pre><br />
myawesomemenu = {<br />
{ "lock", "xscreensaver-command -activate" },<br />
{ "manual", terminal .. " -e man awesome" },<br />
{ "edit config", editor_cmd .. " " .. awful.util.getdir("config") .. "/rc.lua" },<br />
{ "restart", awesome.restart },<br />
{ "quit", awesome.quit }<br />
}<br />
<br />
mycommons = {<br />
{ "pidgin", "pidgin" },<br />
{ "OpenOffice", "soffice-dev" },<br />
{ "Graphic", "gimp" }<br />
}<br />
<br />
mymainmenu = awful.menu.new({ items = { <br />
{ "terminal", terminal },<br />
{ "icecat", "icecat" },<br />
{ "Editor", "gvim" },<br />
{ "File Manager", "pcmanfm" },<br />
{ "VirtualBox", "VirtualBox" },<br />
{ "Common App", mycommons, beautiful.awesome_icon },<br />
{ "awesome", myawesomemenu, beautiful.awesome_icon }<br />
}<br />
})<br />
</pre><br />
<br />
===More Widgets in awesome===<br />
''Widgets in awesome are objects that you can add to any widget-box (statusbars and titlebars), they can provide various information about your system, and are useful for having access to this information, right from your window manager. Widgets are simple to use and offer a great deal of flexibility.'' -- Source [http://awesome.naquadah.org/wiki/Widgets_in_awesome Awesome Wiki: Widgets].<br />
<br />
There's a widely used widget library called '''Wicked''' (compatible with awesome versions '''prior to 3.4'''), that provides more widgets, like MPD widget, CPU usage, memory usage, etc. For more details see the [http://awesome.naquadah.org/wiki/index.php?title=Wicked Wicked page].<br />
<br />
As a replacement for Wicked in awesome v3.4, check '''[http://awesome.naquadah.org/wiki/Vicious Vicious]''', '''[http://awesome.naquadah.org/wiki/Obvious Obvious]''' and '''[http://awesome.naquadah.org/wiki/Bashets Bashets]'''. If you pick Vicious, it is also a good idea to take a look at the [http://git.sysphere.org/vicious/tree/README vicious documentation].<br />
<br />
===Transparency===<br />
Awesome has support for (2D) transparency through xcompmgr. Note that you'll probably want the git version of xcompmgr, which is [http://aur.archlinux.org/packages.php?ID=16554 available in AUR]. <br />
<br />
Add this to your ~/.xinitrc:<br />
exec xcompmgr &<br />
See ''man xcompmgr'' or [[xcompmgr]] for more options.<br />
<br />
In awesome 3.4, window transparency can be set dynamically using signals. For example, your rc.lua could contain the following:<br />
<br />
client.add_signal("focus", function(c)<br />
c.border_color = beautiful.border_focus<br />
c.opacity = 1<br />
end)<br />
client.add_signal("unfocus", function(c)<br />
c.border_color = beautiful.border_normal<br />
c.opacity = 0.7<br />
end)<br />
<br />
Note that if you are using conky, you must set it to create its own window instead of using the desktop. To do so, edit ~/.conkyrc to contain:<br />
<br />
own_window yes<br />
own_window_transparent yes<br />
own_window_type desktop<br />
<br />
Otherwise strange behavior may be observed, such as all windows becoming fully transparent. Note also that since conky will be creating a transparent window on your desktop, any actions defined in awesome's rc.lua for the desktop will not work where conky is.<br />
<br />
As of Awesome 3.1, there is built-in pseudo-transparency for wiboxes. To enable it, append two hexadecimal digits to the colors in your theme file (~/.config/awesome/themes/default, which is usually a copy of /usr/share/awesome/themes/default), as shown here:<br />
<br />
bg_normal = #000000AA<br />
<br />
where "AA" is the transparency value.<br />
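The two hex digits are an alpha value from 00 (fully transparent) to FF (opaque). If you think in percentages, converting is a one-liner; a sketch:<br />

```shell
#!/bin/sh
# Convert an opacity percentage (0-100) into the two-digit hex alpha
# suffix used in colour strings like "#000000AA" (100% = FF = opaque).
alpha_hex() {
    printf '%02X' $(( $1 * 255 / 100 ))
}

# Example: bg_normal = "#000000$(alpha_hex 67)"
```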
<br />
===Autorun programs===<br />
Just add the following code to your rc.lua, and replace the applications in the autorunApps section with anything you like. Example:<br />
-- Autorun programs<br />
autorun = true<br />
autorunApps = <br />
{ <br />
"swiftfox",<br />
"mutt",<br />
"consonance",<br />
"linux-fetion",<br />
"weechat-curses",<br />
}<br />
if autorun then<br />
for app = 1, #autorunApps do<br />
awful.util.spawn(autorunApps[app])<br />
end<br />
end<br />
or like this:<br />
 os.execute("mutt &")<br />
 os.execute("weechat-curses &")<br />
<br />
To execute an application only once, e.g. for restarting awesome, use this function (from the [http://awesome.naquadah.org/wiki/Autostart awesome wiki]):<br />
function run_once(prg)<br />
if not prg then<br />
do return nil end<br />
end<br />
os.execute("x=" .. prg .. "; pgrep -u $USER -x " .. prg .. " || (" .. prg .. " &)")<br />
end<br />
-- AUTORUN APPS!<br />
run_once("parcellite")<br />
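Note that the run_once above passes the whole string to pgrep -x, which breaks for commands that take arguments. The same idea as a standalone shell helper might look like this (a sketch; it assumes the process name equals the binary's basename, which is not true for every program):<br />

```shell
#!/bin/sh
# Extract the program name from a command line,
# e.g. "weechat-curses -a" -> "weechat-curses".
prog_name() {
    basename "${1%% *}"
}

# Start a command only if it is not already running for this user.
# $1 is intentionally left unquoted below so arguments are split off.
run_once() {
    pgrep -u "$USER" -x "$(prog_name "$1")" > /dev/null || ($1 &)
}
```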
<br />
===Passing content to widgets with awesome-client===<br />
<br />
You can easily send text to an awesome widget. Just create a new widget:<br />
<pre><br />
mywidget = widget({ type = "textbox", name = "mywidget" })<br />
mywidget.text = "initial text"<br />
</pre><br />
To update the text from an external source, use awesome-client:<br />
<pre> <br />
echo -e 'mywidget.text = "new text"' | awesome-client<br />
</pre><br />
Don't forget to add the widget to your wibox.<br />
<br />
==Troubleshooting==<br />
<br />
===Mod4 key===<br />
<br />
Mod4 is by default the '''Win key'''. If it is not mapped by default for some reason, you can check the keycode of your Mod4 key with<br />
<br />
$ xev<br />
<br />
It should be 115 for the left one. Then add this to your ~/.xinitrc<br />
<br />
xmodmap -e "keycode 115 = Super_L" -e "add mod4 = Super_L"<br />
exec awesome<br />
<br />
====Mod4 key vs. IBM ThinkPad users====<br />
<br />
IBM ThinkPads do not come equipped with a Windows key (although Lenovo have changed this tradition on their ThinkPads). As of writing, the Alt key is not used in command combinations by the default rc.lua (refer to the awesome wiki for a table of commands), which allows it to be used as a replacement for the Super/Mod4/Win key. To do this, edit your rc.lua and replace:<br />
<br />
modkey = "Mod4"<br />
<br />
by:<br />
<br />
modkey = "Mod1"<br />
<br />
Note: Awesome does have a few commands that make use of Mod4 plus a single letter. Changing Mod4 to Mod1/Alt could cause overlaps for some key combinations. The small number of instances where this happens can be changed in the rc.lua file.<br />
<br />
If you prefer not to change the awesome defaults, you might like to remap a key instead. For instance, the Caps Lock key is rather useless (for me), so add the following contents to ~/.Xmodmap <br />
<br />
clear lock <br />
add mod4 = Caps_Lock<br />
<br />
and [http://wiki.archlinux.org/index.php/Extra_Keyboard_Keys_in_Xorg#Introduction_2 (re)load] the file.<br />
This will change the caps lock key into the mod4 key and works nicely with the standard awesome settings. In addition, if needed, it provides the mod4 key to other X-programs as well.<br />
<br />
===Cairo Memory Leak===<br />
If you are experiencing [http://awesome.naquadah.org/bugs/index.php?do=details&task_id=396 memory leaks], then try [http://aur.archlinux.org/packages.php?ID=9566 cairo-git] from the AUR. [http://bbs.archlinux.org/viewtopic.php?pid=462021 Forum Thread]<br />
<br />
'''Update''': The recent Cairo 1.8.6 release also seems fine to use, as the fix from git should be included there.<br />
<br />
==External Links==<br />
* http://awesome.naquadah.org/wiki/FAQ - FAQ<br />
* http://www.lua.org/pil/ - Programming in Lua (first edition)<br />
* http://awesome.naquadah.org/ - The official awesome website<br />
* http://awesome.naquadah.org/wiki/Main_Page - the awesome wiki<br />
* http://www.penguinsightings.org/desktop/awesome/ - A review</div>Veoxhttps://wiki.archlinux.org/index.php?title=Lisp_package_guidelines&diff=71175Lisp package guidelines2009-06-25T11:43:30Z<p>Veox: /* ASDF */</p>
<hr />
<div>[[Category:Package development (English)]]<br />
[[Category:Guidelines (English)]]<br />
== Background ==<br />
<br />
At the moment, there are relatively few lisp packages available in the<br />
Arch repositories. This means that at some point or another, more will<br />
likely appear. It is useful, therefore, to figure out now, while there<br />
are few packages, how they should be packaged. Therefore, this page<br />
stands as a proposed packaging guideline for lisp packages. Keep in<br />
mind, however, that this is a work in progress; if you disagree with<br />
some of the ideas suggested here, feel free to edit the page and<br />
propose something better.<br />
<br />
== Directory Structure and Naming ==<br />
<br />
There is at least one package in the base repository (libgpg-error)<br />
that includes lisp files, which are placed in<br />
'''/usr/share/common-lisp/source/gpg-error'''. In keeping with this,<br />
other lisp packages should also place their files in<br />
'''/usr/share/common-lisp/source/'''. Each package should have its own<br />
directory, so as not to clutter up this base directory.<br />
<br />
The package directory<br />
should be the name of the lisp package, not what it's called in the<br />
Arch repository (or AUR). This applies even to single-file packages.<br />
<br />
For example, a Lisp package called '''cl-ppcre''' should be called<br />
'''cl-ppcre''' in AUR and reside in '''/usr/share/common-lisp/source/cl-ppcre'''.<br />
A Lisp package called '''alexandria''' should be called '''cl-alexandria'''<br />
in AUR and reside in '''/usr/share/common-lisp/source/alexandria'''.<br />
<br />
== ASDF ==<br />
<br />
Try to avoid the usage of ASDF-Install as a means of installing these<br />
system-wide packages.<br />
<br />
ASDF itself may be necessary or helpful as a means of compiling and/or<br />
loading packages. In that case, it is suggested that the directory<br />
used for the central registry (the location of all of the symlinks <br />
to *.asd) be '''/usr/share/common-lisp/systems/'''.<br />
<br />
However, I have observed problems with doing the compilation with ASDF<br />
as a part of the package build process. It does work, though,<br />
during an install, through use of a package.install file. Such a file<br />
might look like this:<br />
<br />
# cl-ppcre.install<br />
# arg 1: the new package version<br />
post_install() {<br />
echo "---> Compiling lisp files <---"<br />
<br />
clisp --silent -norc -x \<br />
"(load #p\"/usr/share/common-lisp/source/asdf/asdf\") \<br />
(pushnew #p\"/usr/share/common-lisp/systems/\" asdf:*central-registry* :test #'equal) \<br />
(asdf:operate 'asdf:compile-op 'cl-ppcre)"<br />
<br />
echo "---> Done compiling lisp files <---"<br />
<br />
cat << EOM<br />
<br />
To load this library, load asdf and then place the following lines<br />
in your ~/.clisprc.lisp file:<br />
<br />
(push #p"/usr/share/common-lisp/systems/" asdf:*central-registry*)<br />
(asdf:operate 'asdf:load-op 'cl-ppcre)<br />
EOM<br />
}<br />
<br />
post_upgrade() {<br />
post_install $1<br />
}<br />
<br />
pre_remove() {<br />
rm /usr/share/common-lisp/source/cl-ppcre/{*.fas,*.lib}<br />
}<br />
<br />
op=$1<br />
shift<br />
<br />
$op $*<br />
<br />
Of course, for this example to work, there needs to be a symlink to<br />
package.asd in the asdf system directory. During package compilation,<br />
a stanza such as this will do the trick...<br />
<br />
pushd ${_lispdir}/systems<br />
ln -s ../source/cl-ppcre/cl-ppcre.asd .<br />
ln -s ../source/cl-ppcre/cl-ppcre-test.asd .<br />
popd<br />
<br />
...where ''$_lispdir'' is '''${startdir}/pkg/usr/share/common-lisp'''.<br />
By linking to a relative, rather than an absolute, path, it's possible<br />
to guarantee that the link will not break post-install.<br />
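That guarantee is easy to check in isolation with a throwaway tree; a sketch (the paths mirror the cl-ppcre example above, and the copy step stands in for pacman installing the pkg/ tree to /):<br />

```shell
#!/bin/sh
# Returns 0 if a relative symlink made in a fakeroot-style tree still
# resolves after the whole tree is copied elsewhere.
relative_link_survives() {
    pkg=$(mktemp -d)
    mkdir -p "$pkg/usr/share/common-lisp/source/cl-ppcre" \
             "$pkg/usr/share/common-lisp/systems"
    touch "$pkg/usr/share/common-lisp/source/cl-ppcre/cl-ppcre.asd"

    # Link relatively, exactly as in the PKGBUILD stanza above.
    ( cd "$pkg/usr/share/common-lisp/systems" &&
      ln -s ../source/cl-ppcre/cl-ppcre.asd . )

    dest=$(mktemp -d)
    cp -a "$pkg/usr" "$dest/"   # stands in for installing to /
    test -e "$dest/usr/share/common-lisp/systems/cl-ppcre.asd"
}
```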
<br />
== Lisp-specific packaging ==<br />
<br />
When possible, do not make packages specific to a single lisp<br />
implementation; try to be as cross-platform as the package itself will<br />
allow. If, however, the package is specifically designed for a single<br />
lisp implementation (i.e., the developers haven't gotten around to<br />
adding support for others yet, or the package's purpose is<br />
specifically to provide a capability that is built in to another lisp<br />
implementation), it is appropriate to make your Arch package<br />
lisp-specific.<br />
<br />
To avoid making packages implementation-specific, ideally all<br />
implementation packages (SBCL, cmucl, clisp) would be built with the<br />
PKGBUILD field '''common-lisp'''. However, that's not the case (and<br />
that would likely cause problems for people who prefer to have<br />
multiple lisps at their fingertips). In the meantime, you could (a)<br />
not make your package depend on *any* lisp and include a statement in<br />
the package.install file telling folks to make sure they have a lisp<br />
installed (not ideal), or (b) Take direction from the ''sbcl''<br />
PKGBUILD and include a conditional statement to figure out which lisp<br />
is needed (which is hackish and, again, far from ideal). Other ideas<br />
are welcome.<br />
<br />
Also note that if ASDF is needed to install/compile/load the package,<br />
things could potentially get awkward as far as dependencies go, since<br />
SBCL comes with asdf installed, clisp does not but there is an AUR<br />
package, and CMUCL may or may not have it (the author of this doc.<br />
knows next to nothing about CMUCL; sorry).<br />
<br />
People currently maintaining lisp-specific packages that don't need to<br />
be lisp-specific should consider doing at least one of the following:<br />
<br />
* Editing their PKGBUILD(s) to be cross-platform, provided someone else is not already maintaining the same package for a different lisp.<br />
<br />
* Offering to take over maintenance or help with maintenance of the same package for a different lisp, and then combining them into a single package.<br />
<br />
* Offering up their package to the maintainer of a different lisp's version of the same package, so as to allow that person to combine them into a single package.<br />
<br />
(Note that joyfulgirl, the author of this doc., currently maintains<br />
clisp versions of cl-ppcre and of stumpwm; she is open to either<br />
giving up the packages to the maintainers of the SBCL versions or to<br />
maintain the new, cross-platform versions herself if the SBCL-version<br />
maintainers don't want to).<br />
<br />
== Things you, the reader, can do ==<br />
<br />
* Maintain lisp packages following these guidelines<br />
* Update and fix problems with these guidelines<br />
* Keep up with what's changed here<br />
* Provide (polite) thoughts, feedback, and suggestions both on this document and to people's work.<br />
* Translate this page and future updates to this page.</div>Veoxhttps://wiki.archlinux.org/index.php?title=Privoxy&diff=68543Privoxy2009-05-11T09:15:40Z<p>Veox: /* Tor and Privoxy in Firefox */ Reference to Torbutton, editing.</p>
<hr />
<div>[[Category:Networking (English)]]<br />
[[Category:HOWTOs (English)]]<br />
=About=<br />
There might be some situations where you want to be completely anonymous while using the Internet. One way to go about this is using Tor and Privoxy.<br />
<br />
''From Wikipedia, the free encyclopedia:''<br />
<br />
'''Tor''' is an implementation of second-generation onion routing - an anonymity system enabling its users to communicate anonymously on the Internet.<br />
<br />
Users of the Tor network run an onion proxy on their machine. This software connects out to Tor, periodically negotiating a virtual circuit through the Tor network. Tor employs cryptography in a layered manner (hence the 'onion' analogy), ensuring perfect forward secrecy between routers. At the same time, the onion proxy software presents a SOCKS interface to its clients. SOCKS-aware applications may be pointed at Tor, which then multiplexes the traffic through a Tor virtual circuit.<br />
<br />
'''Privoxy''' is a filtering proxy for the HTTP protocol, frequently used in combination with Tor. Privoxy is a web proxy with advanced filtering capabilities for protecting privacy, filtering web page content, managing cookies, controlling access, and removing ads, banners, pop-ups, etc. It supports both stand-alone systems and multi-user networks.<br />
<br />
Using privoxy is necessary because browsers leak your DNS requests when they use a SOCKS proxy directly, which is bad for your anonymity.<br />
=Installation and setup=<br />
First, go to http://whatsmyip.net/ and write down your IP address.<br />
 # pacman -S tor privoxy<br />
Edit your /etc/privoxy/config file and add this line at the end:<br />
forward-socks4a / localhost:9050 .<br />
Make sure your /etc/hosts is correctly set up. By default in Arch, the host has the name "localhost", but you need to make sure it also has the name you used in your /etc/rc.conf.<br />
<br />
E.g. in the Arch default rc.conf HOSTNAME="myhost", so in /etc/hosts it should be:<br />
#<ip-address> <hostname.domain.org> <hostname><br />
127.0.0.1 myhost.localdomain myhost localhost<br />
<br />
Add tor and privoxy to your DAEMONS array in /etc/rc.conf<br />
DAEMONS=(syslog-ng ... privoxy tor)<br />
<br />
Start them both with<br />
# /etc/rc.d/tor start<br />
# /etc/rc.d/privoxy start<br />
or restart your computer.<br />
<br />
=Tor and Privoxy in Firefox=<br />
The easiest way to do this is to use the [https://addons.mozilla.org/firefox/2275/ Torbutton] extension.<br />
<br />
Alternatively, you can use [https://addons.mozilla.org/firefox/125/ SwitchProxy Tool]. After restarting Firefox you will have a new toolbar. Click ''Add'', select ''Standard proxy type''. Choose whatever ''Proxy Label'' you want, e.g. ''Tor''. Enter into both the ''HTTP Proxy'' and ''SSL Proxy'' fields:<br />
<br />
Hostname: 127.0.0.1 Port: 8118<br />
<br />
This will point Firefox at Privoxy. You can also add exceptions in the ''No Proxy for'' field.<br />
<br />
Now, return to http://whatsmyip.net/ and check that your IP is different from before.<br />
<br />
=Another Tor testing link=<br />
You can check that you are using Tor by pointing your browser to [http://serifos.eecs.harvard.edu/cgi-bin/ipaddr.pl?tor=1 this address] or [https://torcheck.xenobite.eu/ this].<br />
<br />
=Tor and Privoxy in other applications=<br />
You can also use this setup in other applications like instant messaging, Jabber, IRC, etc.<br />
<br />
Applications that support HTTP proxies can be pointed at Privoxy (127.0.0.1, port 8118).<br />
<br />
To use a SOCKS proxy directly, you can point your application at Tor (127.0.0.1, port 9050). A problem with this method, though, is that applications doing DNS resolution by themselves may leak information. Consider using SOCKS4a (e.g. via Privoxy) instead.<br />
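For command-line programs that honour the standard proxy environment variables, pointing them at Privoxy is usually enough. A config fragment for ~/.bashrc or similar (which tools respect these variables varies):<br />

```shell
# Route HTTP(S) traffic from proxy-aware command-line tools through
# Privoxy, which in turn forwards to Tor via SOCKS4a.
export http_proxy=http://127.0.0.1:8118
export https_proxy=http://127.0.0.1:8118
```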
<br />
=Links=<br />
Tor - http://www.torproject.org/<br />
<br />
Privoxy - http://www.privoxy.org/</div>Veoxhttps://wiki.archlinux.org/index.php?title=Talk:Tor&diff=68542Talk:Tor2009-05-11T09:00:50Z<p>Veox: Merger.</p>
<hr />
<div>== Merging with the 'Proxy routing with Tor and Privoxy'' article ==<br />
<br />
* Agreed. The other one seems to be slightly better-written, and its title reflects the content better. --[[User:Veox|Veox]] 05:00, 11 May 2009 (EDT)</div>Veoxhttps://wiki.archlinux.org/index.php?title=Graphics_tablet&diff=68474Graphics tablet2009-05-09T13:26:53Z<p>Veox: Started section on WALTOP tablet support by wacom.</p>
<hr />
<div>[[Category:Input devices (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
= Introduction =<br />
<br />
Before we begin, I would like to point out that this guide is only for ''USB''-based Wacom tablets. Furthermore, you can either set up a static ''Xorg'' configuration, meaning things may not work if later on you plug your Wacom tablet into a different ''USB'' port, or follow the dynamic instructions further down. Finally, this guide is based on my experience of installing my ''Graphire4'' tablet, so others may like to add things specific to other Wacom tablets. I do welcome others to update this wiki to include a wider range of information.<br />
<br />
I'd also like to mention that this wiki is very much influenced by the very helpful [http://gentoo-wiki.com/HOWTO_Wacom_Tablet Gentoo Linux Wiki - HOWTO Wacom Tablet], which I recommend anyone visit if they would like to learn about things that are not covered here.<br />
<br />
= Installing =<br />
<br />
== Install Linuxwacom ==<br />
<br />
Thanks to [http://linuxwacom.sourceforge.net The Linux Wacom Project], you only need to install the ''linuxwacom'' package, which contains everything needed to use a Wacom tablet on Linux. You can get this from the [[AUR]].<br />
<br />
== Configure Xorg ==<br />
<br />
''Note: static configuration is deprecated by the X.org project. Consider using HAL (see below) instead to get hotplugging and automatic configuration.''<br />
<br />
Again, I would like to note that I only cover how to set up a static ''Xorg'' configuration, meaning things may not work if later on you plug your Wacom tablet into a different USB port.<br />
<br />
InputDevice "cursor" "SendCoreEvents"<br />
InputDevice "stylus" "SendCoreEvents"<br />
InputDevice "eraser" "SendCoreEvents"<br />
Firstly, add these to the ''ServerLayout'' section of your ''Xorg'' config (/etc/X11/xorg.conf).<br />
<br />
cat /proc/bus/input/devices<br />
Now we need to determine the location of your tablet ''device''. Run the command above, and take note of the ''event'' number in the ''Handlers'' row. We will use this to set the correct device in our ''Xorg'' config below.<br />
<br />
I: Bus=0003 Vendor=056a Product=0016 Version=0403<br />
N: Name="Wacom Graphire4 6x8"<br />
P: Phys=<br />
S: Sysfs=/class/input/input7<br />
H: Handlers=mouse2 event7 ts2 <br />
B: EV=1f<br />
B: KEY=1c63 0 70011 0 0 0 0 0 0 0 0<br />
B: REL=100<br />
B: ABS=100 3000003<br />
B: MSC=1<br />
Here is an example of the output for my ''Graphire4'' tablet. From this, we can determine that my tablet device goes through ''/dev/input/event7''.<br />
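Extracting the event number from that output can also be scripted. A sketch that scans the /proc/bus/input/devices format for a named device (the device name is just an example):<br />

```shell
#!/bin/sh
# Print the event device for a named input device, parsing the
# /proc/bus/input/devices format shown above.
event_device() {
    # $1: devices file, $2: substring of the "N: Name=" line
    awk -v name="$2" '
        /^N:/ { found = index($0, name) }
        found && /^H:/ {
            for (i = 1; i <= NF; i++)
                if ($i ~ /^event[0-9]+$/) { print "/dev/input/" $i; exit }
        }
    ' "$1"
}

# Typical use: event_device /proc/bus/input/devices "Graphire4"
```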
<br />
Section "InputDevice"<br />
Identifier "stylus"<br />
Driver "wacom"<br />
Option "Type" "stylus"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "Mode" "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "Threshold" "5"<br />
EndSection<br />
<br />
Section "InputDevice"<br />
Identifier "eraser"<br />
Driver "wacom"<br />
Option "Type" "eraser"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "Mode" "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "Threshold" "5"<br />
EndSection<br />
<br />
Section "InputDevice"<br />
Identifier "cursor"<br />
Driver "wacom"<br />
Option "Type" "cursor"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "Mode" "Absolute"<br />
Option "Vendor" "WACOM"<br />
EndSection<br />
Now update your ''Xorg'' config (/etc/X11/xorg.conf) as above.<br />
<br />
To learn about each of the Wacom tablet ''Xorg'' options checkout the ''man pages'' found at [http://linuxwacom.sourceforge.net/index.php/howto/inputdev Linux Wacom Project HOWTO - 5.1 - Adding the InputDevices].<br />
<br />
I recommend you checkout [http://linuxwacom.sourceforge.net/index.php/howto/x11 Linux Wacom Project HOWTO - 5.0 - Configuring X11], I also recommend you checkout [http://gentoo-wiki.com/HOWTO_Wacom_Tablet#Xorg Gentoo Linux Wiki - HOWTO Wacom Tablet - Installing - Xorg].<br />
<br />
==== TwinView Setup ====<br />
<br />
If you are going to use two monitors, the aspect ratio while using the tablet might feel unnatural. In order to fix this, you need to add<br />
<br />
Option "TwinView" "horizontal"<br />
<br />
to all of your Wacom InputDevice entries in the xorg.conf file.<br />
You may read more about that [http://ubuntuforums.org/showthread.php?t=640898 here].<br />
<br />
=== Graphire4 buttons ===<br />
<br />
InputDevice "pad" "SendCoreEvents"<br />
Add this to the ''ServerLayout'' section of your ''Xorg'' config (/etc/X11/xorg.conf).<br />
<br />
*Note, it was mentioned at [http://gentoo-wiki.com/HOWTO_Wacom_Tablet#Graphire4_buttons Gentoo Linux Wiki - HOWTO Wacom Tablet - Installing - Xorg - Graphire4 buttons] that there was information somewhere advising NOT to add "SendCoreEvents" to the line above, but it was also said that without this these buttons will not work.<br />
<br />
Section "InputDevice"<br />
Identifier "pad"<br />
Driver "wacom"<br />
Option "Type" "pad"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "ButtonsOnly" "on"<br />
EndSection<br />
Now update your ''Xorg'' config (/etc/X11/xorg.conf) as above.<br />
<br />
I recommend you check out [http://gentoo-wiki.com/HOWTO_Wacom_Tablet#Graphire4_buttons Gentoo Linux Wiki - HOWTO Wacom Tablet - Installing - Xorg - Graphire4 buttons].<br />
<br />
=== Xorg crashes when logging in ===<br />
<br />
In case that happens to you, you need to apply this patch to Xorg and recompile it (from http://sourceforge.net/tracker/index.php?func=detail&aid=1843335&group_id=69596&atid=525124):<br />
<br />
diff -ur xorg-server-1.4.orig/xkb/xkbLEDs.c xorg-server-1.4/xkb/xkbLEDs.c<br />
--- xorg-server-1.4.orig/xkb/xkbLEDs.c 2007-11-01 20:49:02.000000000<br />
+0100<br />
+++ xorg-server-1.4/xkb/xkbLEDs.c 2007-11-01 20:48:03.000000000<br />
+0100<br />
@@ -63,6 +63,9 @@<br />
<br />
sli= XkbFindSrvLedInfo(dev,XkbDfltXIClass,XkbDfltXIId,0);<br />
<br />
+ if (!sli)<br />
+ return 0;<br />
+<br />
if (state_changes&(XkbModifierStateMask|XkbGroupStateMask))<br />
update|= sli->usesEffective;<br />
if (state_changes&(XkbModifierBaseMask|XkbGroupBaseMask))<br />
<br />
Be advised that this patch could lead to unexpected behavior. It is not an official patch; use it at your own risk.<br />
<br />
<br />
=== Tablet devices still do not appear ===<br />
<br />
Start ''Xorg'' with tablet connected. Then look at logs (/var/log/Xorg.0.log) and search for those errors:<br />
<br />
Error opening /dev/input/wacom : Success<br />
(EE) xf86OpenSerial: Cannot open device /dev/input/wacom<br />
No such file or directory.<br />
<br />
This error will show even when the device exists.<br />
<br />
The second error is<br />
<br />
usbDetect: can not ioctl version<br />
Wacom xf86WcmWrite error : Invalid argument<br />
<br />
If you see those errors, check whether your Wacom device is /dev/input/ts3 or another ts device (or a symlink to such a device). If it is, the device is being handled by the Compaq touchscreen emulation, provided by the ''tsdev'' module.<br />
Just unload the module<br />
<br />
modprobe -r tsdev<br />
<br />
and add this module to the blacklist in /etc/rc.conf.<br />
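For reference, with the classic Arch ''initscripts'' a module was blacklisted by prefixing its name with "!" in the MODULES array of /etc/rc.conf. A minimal sketch; the other entry is only a placeholder, not taken from this guide:<br />

```shell
# /etc/rc.conf (fragment) - under the classic Arch initscripts, a "!"
# prefix in the MODULES array keeps that module from being loaded at boot.
# "snd_hda_intel" is only a placeholder for whatever else may be listed.
MODULES=(snd_hda_intel !tsdev)
```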
<br />
==Dynamic (udev) Xorg setup==<br />
Again thanks to [http://gentoo-wiki.com/HOWTO_Wacom_Tablet Gentoo Linux Wiki - HOWTO Wacom Tablet] for the information! This was done with a ''Volito2'', and so reflects the experiences with that tablet, but it should work for any tablet supported by the linuxwacom project.<br />
<br />
'''Note''': The linuxwacom package from the AUR already includes a udev rules file, so if you are using that package you can skip this part and move on to the xorg.conf configuration.<br />
<br />
Install ''udev'' from the repositories.<br />
Run the ''lsusb'' command. It should return something like this:<br />
<br />
Bus 002 Device 007: ID 056a:0062 Wacom Co., Ltd<br />
Bus 002 Device 006: ID 03eb:0902 Atmel Corp.<br />
Bus 002 Device 005: ID 0bc2:0503 Seagate RSS LLC<br />
Bus 002 Device 004: ID 05e3:0660 Genesys Logic, Inc. USB 2.0 Hub<br />
Bus 002 Device 001: ID 1d6b:0002<br />
Bus 003 Device 001: ID 1d6b:0001<br />
Bus 001 Device 003: ID 06a3:8000 Saitek PLC<br />
Bus 001 Device 002: ID 045e:00d1 Microsoft Corp.<br />
Bus 001 Device 001: ID 1d6b:0001<br />
Here you can see my tablet among the other devices. We are interested in the tablet and the mouse - unless, of course, no mouse is attached to the computer in question.<br />
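If it helps, the "vendor:product" ID field of an lsusb line can be split apart mechanically. A small sketch, run here against the sample line from the output above; on a live system you would pipe the real lsusb output in instead:<br />

```shell
# Hedged sketch: split the "vendor:product" ID field out of an lsusb line.
# The sample line is copied from the listing above; field 6 of the line
# is the "056a:0062" pair.
echo 'Bus 002 Device 007: ID 056a:0062 Wacom Co., Ltd' |
  awk '{ split($6, id, ":"); print "vendor=" id[1], "product=" id[2] }'
# prints: vendor=056a product=0062
```

The first part is the VendorID used below for the tablet rule; the second is the ProductID used for the mouse rule.<br />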
Next, create the file ''10-local.rules'' in ''/etc/udev/rules.d'' and add these two lines:<br />
<br />
KERNEL=="event*", SYSFS{idVendor}=="056a", NAME="input/%k", SYMLINK="input/wacom"<br />
KERNEL=="mouse*", SYSFS{idProduct}=="045e", NAME="input/%k", SYMLINK="input/mouse_udev"<br />
Of course, you need to change '056a' and '045e' to whatever lsusb returns for you - I used the VendorID for my tablet and the ProductID for my mouse.<br />
Save the file and start udev using the command ''/etc/start_udev''.<br />
Check to make sure that the symlink has appeared in ''/dev/input''.<br />
<br />
bash-3.2# cd /dev/input<br />
bash-3.2# ls<br />
by-id event0 event2 event4 event6 event8 mouse0 mouse2 wacom<br />
by-path event1 event3 event5 event7 mice mouse1 mouse_udev<br />
You can even check that the device works by<br />
<br />
# cat wacom<br />
It should make lots of odd characters appear onscreen.<br />
If it works, then all that is left to do is add the relevant information to ''/etc/X11/xorg.conf''.<br />
<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "stylus"<br />
Option "Device" "/dev/input/wacom"<br />
Option "Type" "stylus"<br />
Option "USB" "on" # USB ONLY<br />
Option "Mode" "Relative" # other option: "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "tilt" "on" # add this if your tablet supports tilt<br />
Option "Threshold" "5" # the official linuxwacom howto advises this line<br />
EndSection<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "eraser"<br />
Option "Device" "/dev/input/wacom"<br />
Option "Type" "eraser"<br />
Option "USB" "on" # USB ONLY<br />
Option "Mode" "Relative" # other option: "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "tilt" "on" # add this if your tablet supports tilt<br />
Option "Threshold" "5" # the official linuxwacom howto advises this line<br />
EndSection<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "cursor"<br />
Option "Device" "/dev/input/wacom"<br />
Option "Type" "cursor"<br />
Option "USB" "on" # USB ONLY<br />
Option "Mode" "Relative" # other option: "Absolute"<br />
Option "Vendor" "WACOM"<br />
EndSection<br />
Make sure that you also change the path (''"Device"'') to your mouse, as it will be ''/dev/input/mouse_udev'' now.<br />
<br />
Section "InputDevice"<br />
Identifier "Mouse1"<br />
Driver "mouse"<br />
Option "CorePointer"<br />
Option "Device" "/dev/input/mouse_udev"<br />
Option "SendCoreEvents" "true"<br />
Option "Protocol" "IMPS/2"<br />
Option "ZAxisMapping" "4 5"<br />
Option "Buttons" "5"<br />
EndSection<br />
Add this to the ''ServerLayout'' section<br />
<br />
InputDevice "cursor" "SendCoreEvents" <br />
InputDevice "stylus" "SendCoreEvents"<br />
InputDevice "eraser" "SendCoreEvents"<br />
And finally, make sure to update the identifier of your mouse in the ''ServerLayout'' section - mine went from<br />
<br />
InputDevice "Mouse0" "CorePointer"<br />
To<br />
<br />
InputDevice "Mouse1" "CorePointer"<br />
<br />
==Xorg input hotplugging setup==<br />
<br />
To use a Wacom/WALTOP/N-Trig tablet with Xorg hotplugging create /etc/hal/fdi/policy/10-tablet.fdi with this code:<br />
<br />
<?xml version="1.0" encoding="UTF-8"?> &lt;!-- -*- SGML -*- --><br />
<br />
<deviceinfo version="0.2"><br />
<device><br />
<match key="info.capabilities" contains="input"><br />
<match key="info.product" contains="Wacom"><br />
<merge key="input.x11_driver" type="string">wacom</merge><br />
<merge key="input.x11_options.Type" type="string">stylus</merge><br />
</match><br />
<match key="info.product" contains="WALTOP"><br />
<merge key="input.x11_driver" type="string">wacom</merge><br />
<merge key="input.x11_options.Type" type="string">stylus</merge><br />
</match><br />
</match><br />
<!-- N-Trig Duosense Electromagnetic Digitizer --><br />
<match key="info.product" contains="HID 1b96:0001"><br />
<match key="info.parent" contains="if0"><br />
<merge key="input.x11_driver" type="string">wacom</merge><br />
<merge key="input.x11_options.Type" type="string">stylus</merge><br />
</match><br />
</match><br />
</device><br />
</deviceinfo><br />
<br />
Then kill your X server, restart HAL and start the X server again.<br />
<br />
= WALTOP tablet support by the Wacom drivers =<br />
<br />
As of recently, the Wacom drivers cannot be started with WALTOP tablets, although the functionality is present. This is due to a vendor check in the ''wacom'' X.org driver. To bypass this check, download the ''linuxwacom'' sources (for example, the ''linuxwacom-cvs'' package from the AUR) and apply a patch similar to the following to src/xdrv/wcmUSB.c:<br />
<br />
% cat wcmUSB.c.diff <br />
528,529c528,529<br />
< /* vendor is wacom */<br />
< if (sID[1] == 0x056A)<br />
---<br />
> /* vendor is wacom or waltop*/<br />
> if (sID[1] == 0x056A || sID[1] == 0x172f)<br />
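The diff above is in the traditional "normal" format rather than the unified format. Applying such a diff with patch(1) looks roughly like the following self-contained toy example; ''file.txt'' and ''fix.diff'' are made-up stand-ins, and in the real source tree you would run something like ''patch src/xdrv/wcmUSB.c wcmUSB.c.diff'' instead:<br />

```shell
# Toy demonstration of applying a normal-format diff with patch(1).
# The file names here are placeholders, not part of linuxwacom.
cd "$(mktemp -d)"
printf 'line one\nold line\nline three\n' > file.txt
cat > fix.diff <<'EOF'
2c2
< old line
---
> new line
EOF
patch file.txt fix.diff   # rewrites line 2 of file.txt in place
grep 'new line' file.txt  # confirm the change was applied
```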
<br />
<br />
= The GIMP =<br />
<br />
To enable proper usage and pressure-sensitive painting in [http://www.gimp.org The GIMP], just go to ''"Preferences -> Input Devices -> Configure Extended Input Devices..."''. For each of your ''eraser'', ''stylus'', and ''cursor'' '''devices''', set the '''mode''' to ''Screen'', and remember to save.<br />
<br />
*Please take note that the ''pad'' '''device''', if present, should be kept disabled, as The GIMP does not appear to support it. Alternatively, to use such features of your tablet, map them to keyboard commands with a program such as [http://hem.bredband.net/devel/wacom/ Wacom ExpressKeys].<br />
<br />
*You should also take note that the tool selected for the ''stylus'' is independent of that of the ''eraser''. This can actually be quite handy, as you can have the ''eraser'' set to act as any tool you like.<br />
<br />
I recommend you check out [http://linuxwacom.sourceforge.net/index.php/howto/gimp Linux Wacom Project HOWTO - 10.0 - Working With Gimp] and the ''Setting up GIMP'' section of [http://www.gimptalk.com/forum/topic.php?t=17992&start=1 GIMP Talk - Community - Install Guide: Getting Wacom Drawing Tablets To Work In Gimp].<br />
<br />
= Inkscape =<br />
<br />
As in The GIMP, simply go to ''"File -> Input Devices..."''. For each of your ''eraser'', ''stylus'', and ''cursor'' '''devices''', set the '''mode''' to ''Screen'', and remember to save.<br />
<br />
= Krita = <br />
<br />
To get your tablet working in Krita, go to ''"Settings -> Configure Krita..."'', click on ''Tablet'', and then, as in Inkscape and GIMP, set the mode of the ''stylus'' and any other devices to ''Screen''.<br />
<br />
== Bamboo ==<br />
<br />
'''Note''': Some users have reported problems with linuxwacom 0.8.1-1 and the Bamboo: the cursor jumped around when they tried to use stylus tilt. To avoid that problem, simply use linuxwacom 0.8.0 (you can simply edit the pkgver in the PKGBUILD).<br />
<br />
If you use an older version of linuxwacom, you may not be able to use your pen with GIMP or Inkscape when configured as above, because the stylus fires a button2 event instead of a button1 event; the same goes for the eraser. To correct this, add these lines to the appropriate sections of your xorg.conf (do not copy and paste the whole sections - only add the button options):<br />
<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "stylus"<br />
Option "Button1" "1" #this line is important<br />
Option "Button2" "1" #this line is important<br />
EndSection<br />
<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "eraser"<br />
Option "Button1" "1" #this line is important<br />
Option "Button2" "1" #this line is important<br />
EndSection<br />
<br />
Be advised that with this workaround pressure-sensitive painting in GIMP and Inkscape will work, but the lower button of the pen will also fire a button1 event, the same as the stylus and eraser. You cannot configure any other button for Button2; it has to be the same as Button1. There is no need to add these lines to the cursor section, since the Bamboo does not ship with a mouse. Still, I advise you not to remove the cursor device as an input device, not even from the ServerLayout section - that led to an unstable X server in my case.<br />
<br />
= Linuxwacom 0.8.1 bug =<br />
<br />
If you have trouble with the linuxwacom 0.8.1 beta/development driver as reported [http://bbs.archlinux.org/viewtopic.php?pid=421375 here] - for example, the cursor freezing when you apply pressure - then you should try the linuxwacom 0.8.0 production version. Just download the existing PKGBUILD and other files from the [http://aur.archlinux.org/packages/linuxwacom/linuxwacom/ AUR] and change '''pkgver''' from 0.8.1 to '''0.8.0''', and the first '''md5sum''' from 4b78f1b66f6e9097a393cf1e3cdf87a3 to '''1d89b464392515492bb7b97c20e68d4e'''.<br />
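Those two edits can also be made with sed. The sketch below generates a stub PKGBUILD so that it is self-contained; on a real system you would run only the sed command, inside the directory holding the downloaded AUR files:<br />

```shell
# Sketch of the two PKGBUILD edits with sed(1). The stub PKGBUILD
# written here stands in for the real one downloaded from the AUR.
cd "$(mktemp -d)"
printf 'pkgver=0.8.1\nmd5sums=(4b78f1b66f6e9097a393cf1e3cdf87a3)\n' > PKGBUILD
sed -i -e 's/^pkgver=0.8.1/pkgver=0.8.0/' \
       -e 's/4b78f1b66f6e9097a393cf1e3cdf87a3/1d89b464392515492bb7b97c20e68d4e/' \
       PKGBUILD
cat PKGBUILD   # pkgver and the first md5sum are now the 0.8.0 values
```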
<br />
= References =<br />
*[http://gentoo-wiki.com/HOWTO_Wacom_Tablet Gentoo Linux Wiki - HOWTO Wacom Tablet]<br />
*[http://linuxwacom.sourceforge.net/index.php/howto/main Linux Wacom Project HOWTO]<br />
*[http://www.gimptalk.com/forum/topic.php?t=17992&start=1 GIMP Talk - Community - Install Guide: Getting Wacom Drawing Tablets To Work In Gimp]</div>Veoxhttps://wiki.archlinux.org/index.php?title=Graphics_tablet&diff=68473Graphics tablet2009-05-09T13:16:45Z<p>Veox: /* Xorg input hotplugging setup */</p>
<hr />
<div>[[Category:Input devices (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
= Introduction =<br />
<br />
Before we begin, I would like to point out that this guide is only for ''USB''-based Wacom tablets. Furthermore, you can either set up a static ''Xorg'' configuration, meaning things may not work if you later plug your Wacom tablet into a different ''USB'' port, or follow the dynamic instructions further down. Finally, this guide is based on my experience of installing my ''Graphire4'' tablet, so others may like to add things specific to other Wacom tablets. I welcome others to update this wiki to include a wider range of information.<br />
<br />
I'd also like to mention that this wiki is very much influenced by the very helpful [http://gentoo-wiki.com/HOWTO_Wacom_Tablet Gentoo Linux Wiki - HOWTO Wacom Tablet], which I recommend anyone visit if they would like to learn about things that are not covered here.<br />
<br />
= Installing =<br />
<br />
== Install Linuxwacom ==<br />
<br />
Thanks to [http://linuxwacom.sourceforge.net The Linux Wacom Project], you only need to install the ''linuxwacom'' package, which contains everything needed to use a Wacom tablet on Linux. You can get this from the [[AUR]].<br />
<br />
== Configure Xorg ==<br />
<br />
''Note: static configuration is deprecated by the X.org project. Consider using HAL (see below) instead to get hotplugging and automatic configuration.''<br />
<br />
Again, I would like to note that this covers only how to set up a static ''Xorg'' configuration, meaning things may not work if you later plug your Wacom tablet into a different USB port.<br />
<br />
InputDevice "cursor" "SendCoreEvents"<br />
InputDevice "stylus" "SendCoreEvents"<br />
InputDevice "eraser" "SendCoreEvents"<br />
Firstly, add these to the ''ServerLayout'' section of your ''Xorg'' config (/etc/X11/xorg.conf).<br />
<br />
cat /proc/bus/input/devices<br />
Now we need to determine the location of your tablet ''device''. Run the command above, and take note of the ''event'' number of the ''Handlers'' row. We will use this to set the correct device in our ''Xorg'' config below.<br />
<br />
I: Bus=0003 Vendor=056a Product=0016 Version=0403<br />
N: Name="Wacom Graphire4 6x8"<br />
P: Phys=<br />
S: Sysfs=/class/input/input7<br />
H: Handlers=mouse2 event7 ts2 <br />
B: EV=1f<br />
B: KEY=1c63 0 70011 0 0 0 0 0 0 0 0<br />
B: REL=100<br />
B: ABS=100 3000003<br />
B: MSC=1<br />
Here is an example of the output for my ''Graphire4'' tablet. From this, we can determine that my tablet device goes through ''/dev/input/event7''.<br />
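This lookup can also be scripted. Below is a hedged sketch that extracts the eventN handler from such output; it is run here against the sample above, so on a live system you would read /proc/bus/input/devices instead of the here-document:<br />

```shell
# Hedged sketch: print the /dev/input/eventN node for the first entry
# whose Name contains "Wacom". The here-document reproduces the sample
# output above; replace it with </proc/bus/input/devices on a real system.
awk '/^N: Name=.*Wacom/ { found = 1 }
     found && /^H: Handlers=/ {
       for (i = 1; i <= NF; i++)
         if ($i ~ /^event/) print "/dev/input/" $i
       exit
     }' <<'EOF'
I: Bus=0003 Vendor=056a Product=0016 Version=0403
N: Name="Wacom Graphire4 6x8"
H: Handlers=mouse2 event7 ts2
EOF
# prints: /dev/input/event7
```

The printed path is what goes into the ''"Device"'' option below.<br />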
<br />
Section "InputDevice"<br />
Identifier "stylus"<br />
Driver "wacom"<br />
Option "Type" "stylus"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "Mode" "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "Threshold" "5"<br />
EndSection<br />
<br />
Section "InputDevice"<br />
Identifier "eraser"<br />
Driver "wacom"<br />
Option "Type" "eraser"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "Mode" "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "Threshold" "5"<br />
EndSection<br />
<br />
Section "InputDevice"<br />
Identifier "cursor"<br />
Driver "wacom"<br />
Option "Type" "cursor"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "Mode" "Absolute"<br />
Option "Vendor" "WACOM"<br />
EndSection<br />
Now update your ''Xorg'' config (/etc/X11/xorg.conf) as above.<br />
<br />
To learn about each of the Wacom tablet ''Xorg'' options, check out the ''man pages'' found at [http://linuxwacom.sourceforge.net/index.php/howto/inputdev Linux Wacom Project HOWTO - 5.1 - Adding the InputDevices].<br />
<br />
I recommend you check out [http://linuxwacom.sourceforge.net/index.php/howto/x11 Linux Wacom Project HOWTO - 5.0 - Configuring X11] and [http://gentoo-wiki.com/HOWTO_Wacom_Tablet#Xorg Gentoo Linux Wiki - HOWTO Wacom Tablet - Installing - Xorg].<br />
<br />
==== TwinView Setup ====<br />
<br />
If you are going to use two monitors, the aspect ratio while using the tablet might feel unnatural. To fix this, add<br />
<br />
Option "TwinView" "horizontal"<br />
<br />
to all of your Wacom InputDevice entries in the xorg.conf file.<br />
You may read more about that [http://ubuntuforums.org/showthread.php?t=640898 here].<br />
<br />
=== Graphire4 buttons ===<br />
<br />
InputDevice "pad" "SendCoreEvents"<br />
Add this to the ''ServerLayout'' section of your ''Xorg'' config (/etc/X11/xorg.conf).<br />
<br />
*Note: it was mentioned at [http://gentoo-wiki.com/HOWTO_Wacom_Tablet#Graphire4_buttons Gentoo Linux Wiki - HOWTO Wacom Tablet - Installing - Xorg - Graphire4 buttons] that some sources advise NOT to add "SendCoreEvents" to the line above, but it was also said that without it these buttons will not work.<br />
<br />
Section "InputDevice"<br />
Identifier "pad"<br />
Driver "wacom"<br />
Option "Type" "pad"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "ButtonsOnly" "on"<br />
EndSection<br />
Now update your ''Xorg'' config (/etc/X11/xorg.conf) as above.<br />
<br />
I recommend you check out [http://gentoo-wiki.com/HOWTO_Wacom_Tablet#Graphire4_buttons Gentoo Linux Wiki - HOWTO Wacom Tablet - Installing - Xorg - Graphire4 buttons].<br />
<br />
=== Xorg crashes when logging in ===<br />
<br />
In case that happens to you, you need to apply this patch to Xorg and recompile it (from http://sourceforge.net/tracker/index.php?func=detail&aid=1843335&group_id=69596&atid=525124):<br />
<br />
diff -ur xorg-server-1.4.orig/xkb/xkbLEDs.c xorg-server-1.4/xkb/xkbLEDs.c<br />
--- xorg-server-1.4.orig/xkb/xkbLEDs.c 2007-11-01 20:49:02.000000000<br />
+0100<br />
+++ xorg-server-1.4/xkb/xkbLEDs.c 2007-11-01 20:48:03.000000000<br />
+0100<br />
@@ -63,6 +63,9 @@<br />
<br />
sli= XkbFindSrvLedInfo(dev,XkbDfltXIClass,XkbDfltXIId,0);<br />
<br />
+ if (!sli)<br />
+ return 0;<br />
+<br />
if (state_changes&(XkbModifierStateMask|XkbGroupStateMask))<br />
update|= sli->usesEffective;<br />
if (state_changes&(XkbModifierBaseMask|XkbGroupBaseMask))<br />
<br />
Be advised that this patch could lead to unexpected behavior. It is not an official patch; use it at your own risk.<br />
<br />
<br />
=== Tablet devices still do not appear ===<br />
<br />
Start ''Xorg'' with tablet connected. Then look at logs (/var/log/Xorg.0.log) and search for those errors:<br />
<br />
Error opening /dev/input/wacom : Success<br />
(EE) xf86OpenSerial: Cannot open device /dev/input/wacom<br />
No such file or directory.<br />
<br />
This error will show even when the device exists.<br />
<br />
The second error is<br />
<br />
usbDetect: can not ioctl version<br />
Wacom xf86WcmWrite error : Invalid argument<br />
<br />
If you see those errors, check whether your Wacom device is /dev/input/ts3 or another ts device (or a symlink to such a device). If it is, the device is being handled by the Compaq touchscreen emulation, provided by the ''tsdev'' module.<br />
Just unload the module<br />
<br />
modprobe -r tsdev<br />
<br />
and add this module to the blacklist in /etc/rc.conf.<br />
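For reference, with the classic Arch ''initscripts'' a module was blacklisted by prefixing its name with "!" in the MODULES array of /etc/rc.conf. A minimal sketch; the other entry is only a placeholder, not taken from this guide:<br />

```shell
# /etc/rc.conf (fragment) - under the classic Arch initscripts, a "!"
# prefix in the MODULES array keeps that module from being loaded at boot.
# "snd_hda_intel" is only a placeholder for whatever else may be listed.
MODULES=(snd_hda_intel !tsdev)
```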
<br />
==Dynamic (udev) Xorg setup==<br />
Again thanks to [http://gentoo-wiki.com/HOWTO_Wacom_Tablet Gentoo Linux Wiki - HOWTO Wacom Tablet] for the information! This was done with a ''Volito2'', and so reflects the experiences with that tablet, but it should work for any tablet supported by the linuxwacom project.<br />
<br />
'''Note''': The linuxwacom package from the AUR already includes a udev rules file, so if you are using that package you can skip this part and move on to the xorg.conf configuration.<br />
<br />
Install ''udev'' from the repositories.<br />
Run the ''lsusb'' command. It should return something like this:<br />
<br />
Bus 002 Device 007: ID 056a:0062 Wacom Co., Ltd<br />
Bus 002 Device 006: ID 03eb:0902 Atmel Corp.<br />
Bus 002 Device 005: ID 0bc2:0503 Seagate RSS LLC<br />
Bus 002 Device 004: ID 05e3:0660 Genesys Logic, Inc. USB 2.0 Hub<br />
Bus 002 Device 001: ID 1d6b:0002<br />
Bus 003 Device 001: ID 1d6b:0001<br />
Bus 001 Device 003: ID 06a3:8000 Saitek PLC<br />
Bus 001 Device 002: ID 045e:00d1 Microsoft Corp.<br />
Bus 001 Device 001: ID 1d6b:0001<br />
Here you can see my tablet among the other devices. We are interested in the tablet and the mouse - unless, of course, no mouse is attached to the computer in question.<br />
Next, create the file ''10-local.rules'' in ''/etc/udev/rules.d'' and add these two lines:<br />
<br />
KERNEL=="event*", SYSFS{idVendor}=="056a", NAME="input/%k", SYMLINK="input/wacom"<br />
KERNEL=="mouse*", SYSFS{idProduct}=="045e", NAME="input/%k", SYMLINK="input/mouse_udev"<br />
Of course, you need to change '056a' and '045e' to whatever lsusb returns for you - I used the VendorID for my tablet and the ProductID for my mouse.<br />
Save the file and start udev using the command ''/etc/start_udev''.<br />
Check to make sure that the symlink has appeared in ''/dev/input''.<br />
<br />
bash-3.2# cd /dev/input<br />
bash-3.2# ls<br />
by-id event0 event2 event4 event6 event8 mouse0 mouse2 wacom<br />
by-path event1 event3 event5 event7 mice mouse1 mouse_udev<br />
You can even check that the device works by<br />
<br />
# cat wacom<br />
It should make lots of odd characters appear onscreen.<br />
If it works, then all that is left to do is add the relevant information to ''/etc/X11/xorg.conf''.<br />
<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "stylus"<br />
Option "Device" "/dev/input/wacom"<br />
Option "Type" "stylus"<br />
Option "USB" "on" # USB ONLY<br />
Option "Mode" "Relative" # other option: "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "tilt" "on" # add this if your tablet supports tilt<br />
Option "Threshold" "5" # the official linuxwacom howto advises this line<br />
EndSection<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "eraser"<br />
Option "Device" "/dev/input/wacom"<br />
Option "Type" "eraser"<br />
Option "USB" "on" # USB ONLY<br />
Option "Mode" "Relative" # other option: "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "tilt" "on" # add this if your tablet supports tilt<br />
Option "Threshold" "5" # the official linuxwacom howto advises this line<br />
EndSection<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "cursor"<br />
Option "Device" "/dev/input/wacom"<br />
Option "Type" "cursor"<br />
Option "USB" "on" # USB ONLY<br />
Option "Mode" "Relative" # other option: "Absolute"<br />
Option "Vendor" "WACOM"<br />
EndSection<br />
Make sure that you also change the path (''"Device"'') to your mouse, as it will be ''/dev/input/mouse_udev'' now.<br />
<br />
Section "InputDevice"<br />
Identifier "Mouse1"<br />
Driver "mouse"<br />
Option "CorePointer"<br />
Option "Device" "/dev/input/mouse_udev"<br />
Option "SendCoreEvents" "true"<br />
Option "Protocol" "IMPS/2"<br />
Option "ZAxisMapping" "4 5"<br />
Option "Buttons" "5"<br />
EndSection<br />
Add this to the ''ServerLayout'' section<br />
<br />
InputDevice "cursor" "SendCoreEvents" <br />
InputDevice "stylus" "SendCoreEvents"<br />
InputDevice "eraser" "SendCoreEvents"<br />
And finally, make sure to update the identifier of your mouse in the ''ServerLayout'' section - mine went from<br />
<br />
InputDevice "Mouse0" "CorePointer"<br />
To<br />
<br />
InputDevice "Mouse1" "CorePointer"<br />
<br />
==Xorg input hotplugging setup==<br />
<br />
To use a Wacom/WALTOP/N-Trig tablet with Xorg hotplugging create /etc/hal/fdi/policy/10-tablet.fdi with this code:<br />
<br />
<?xml version="1.0" encoding="UTF-8"?> &lt;!-- -*- SGML -*- --><br />
<br />
<deviceinfo version="0.2"><br />
<device><br />
<match key="info.capabilities" contains="input"><br />
<match key="info.product" contains="Wacom"><br />
<merge key="input.x11_driver" type="string">wacom</merge><br />
<merge key="input.x11_options.Type" type="string">stylus</merge><br />
</match><br />
<match key="info.product" contains="WALTOP"><br />
<merge key="input.x11_driver" type="string">wacom</merge><br />
<merge key="input.x11_options.Type" type="string">stylus</merge><br />
</match><br />
</match><br />
<!-- N-Trig Duosense Electromagnetic Digitizer --><br />
<match key="info.product" contains="HID 1b96:0001"><br />
<match key="info.parent" contains="if0"><br />
<merge key="input.x11_driver" type="string">wacom</merge><br />
<merge key="input.x11_options.Type" type="string">stylus</merge><br />
</match><br />
</match><br />
</device><br />
</deviceinfo><br />
<br />
Then kill your X server, restart HAL and start the X server again.<br />
<br />
= The GIMP =<br />
<br />
To enable proper usage and pressure-sensitive painting in [http://www.gimp.org The GIMP], just go to ''"Preferences -> Input Devices -> Configure Extended Input Devices..."''. For each of your ''eraser'', ''stylus'', and ''cursor'' '''devices''', set the '''mode''' to ''Screen'', and remember to save.<br />
<br />
*Please take note that the ''pad'' '''device''', if present, should be kept disabled, as The GIMP does not appear to support it. Alternatively, to use such features of your tablet, map them to keyboard commands with a program such as [http://hem.bredband.net/devel/wacom/ Wacom ExpressKeys].<br />
<br />
*You should also take note that the tool selected for the ''stylus'' is independent of that of the ''eraser''. This can actually be quite handy, as you can have the ''eraser'' set to act as any tool you like.<br />
<br />
I recommend you check out [http://linuxwacom.sourceforge.net/index.php/howto/gimp Linux Wacom Project HOWTO - 10.0 - Working With Gimp] and the ''Setting up GIMP'' section of [http://www.gimptalk.com/forum/topic.php?t=17992&start=1 GIMP Talk - Community - Install Guide: Getting Wacom Drawing Tablets To Work In Gimp].<br />
<br />
= Inkscape =<br />
<br />
As in The GIMP, simply go to ''"File -> Input Devices..."''. For each of your ''eraser'', ''stylus'', and ''cursor'' '''devices''', set the '''mode''' to ''Screen'', and remember to save.<br />
<br />
= Krita = <br />
<br />
To get your tablet working in Krita, go to ''"Settings -> Configure Krita..."'', click on ''Tablet'', and then, as in Inkscape and GIMP, set the mode of the ''stylus'' and any other devices to ''Screen''.<br />
<br />
== Bamboo ==<br />
<br />
'''Note''': Some users have reported problems with linuxwacom 0.8.1-1 and the Bamboo: the cursor jumped around when they tried to use stylus tilt. To avoid that problem, simply use linuxwacom 0.8.0 (you can simply edit the pkgver in the PKGBUILD).<br />
<br />
If you use an older version of linuxwacom, you may not be able to use your pen with GIMP or Inkscape when configured as above, because the stylus fires a button2 event instead of a button1 event; the same goes for the eraser. To correct this, add these lines to the appropriate sections of your xorg.conf (do not copy and paste the whole sections - only add the button options):<br />
<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "stylus"<br />
Option "Button1" "1" #this line is important<br />
Option "Button2" "1" #this line is important<br />
EndSection<br />
<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "eraser"<br />
Option "Button1" "1" #this line is important<br />
Option "Button2" "1" #this line is important<br />
EndSection<br />
<br />
Be advised that with this workaround pressure-sensitive painting in GIMP and Inkscape will work, but the lower button of the pen will also fire a button1 event, the same as the stylus and eraser. You cannot configure any other button for Button2; it has to be the same as Button1. There is no need to add these lines to the cursor section, since the Bamboo does not ship with a mouse. Still, I advise you not to remove the cursor device as an input device, not even from the ServerLayout section - that led to an unstable X server in my case.<br />
<br />
= Linuxwacom 0.8.1 bug =<br />
<br />
If you have trouble with the linuxwacom 0.8.1 beta/development driver as reported [http://bbs.archlinux.org/viewtopic.php?pid=421375 here] - for example, the cursor freezing when you apply pressure - then you should try the linuxwacom 0.8.0 production version. Just download the existing PKGBUILD and other files from the [http://aur.archlinux.org/packages/linuxwacom/linuxwacom/ AUR] and change '''pkgver''' from 0.8.1 to '''0.8.0''', and the first '''md5sum''' from 4b78f1b66f6e9097a393cf1e3cdf87a3 to '''1d89b464392515492bb7b97c20e68d4e'''.<br />
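Those two edits can also be made with sed. The sketch below generates a stub PKGBUILD so that it is self-contained; on a real system you would run only the sed command, inside the directory holding the downloaded AUR files:<br />

```shell
# Sketch of the two PKGBUILD edits with sed(1). The stub PKGBUILD
# written here stands in for the real one downloaded from the AUR.
cd "$(mktemp -d)"
printf 'pkgver=0.8.1\nmd5sums=(4b78f1b66f6e9097a393cf1e3cdf87a3)\n' > PKGBUILD
sed -i -e 's/^pkgver=0.8.1/pkgver=0.8.0/' \
       -e 's/4b78f1b66f6e9097a393cf1e3cdf87a3/1d89b464392515492bb7b97c20e68d4e/' \
       PKGBUILD
cat PKGBUILD   # pkgver and the first md5sum are now the 0.8.0 values
```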
<br />
= References =<br />
*[http://gentoo-wiki.com/HOWTO_Wacom_Tablet Gentoo Linux Wiki - HOWTO Wacom Tablet]<br />
*[http://linuxwacom.sourceforge.net/index.php/howto/main Linux Wacom Project HOWTO]<br />
*[http://www.gimptalk.com/forum/topic.php?t=17992&start=1 GIMP Talk - Community - Install Guide: Getting Wacom Drawing Tablets To Work In Gimp]</div>Veoxhttps://wiki.archlinux.org/index.php?title=Graphics_tablet&diff=68472Graphics tablet2009-05-09T13:11:36Z<p>Veox: /* Configure Xorg */</p>
<hr />
<div>[[Category:Input devices (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
= Introduction =<br />
<br />
Before we begin, I would like to point out that this guide is only for ''USB''-based Wacom tablets. Furthermore, you can either set up a static ''Xorg'' configuration, meaning things may not work if you later plug your Wacom tablet into a different ''USB'' port, or follow the dynamic instructions further down. Finally, this guide is based on my experience of installing my ''Graphire4'' tablet, so others may like to add things specific to other Wacom tablets. I welcome others to update this wiki to include a wider range of information.<br />
<br />
I'd also like to mention that this wiki is very much influenced by the very helpful [http://gentoo-wiki.com/HOWTO_Wacom_Tablet Gentoo Linux Wiki - HOWTO Wacom Tablet], which I recommend anyone visit if they would like to learn about things that are not covered here.<br />
<br />
= Installing =<br />
<br />
== Install Linuxwacom ==<br />
<br />
Thanks to [http://linuxwacom.sourceforge.net The Linux Wacom Project], you only need to install the ''linuxwacom'' package, which contains everything needed to use a Wacom tablet on Linux. You can get this from the [[AUR]].<br />
<br />
== Configure Xorg ==<br />
<br />
''Note: static configuration is deprecated by the X.org project. Consider using HAL (see below) instead to get hotplugging and automatic configuration.''<br />
<br />
Again, I would like to note that this covers only how to set up a static ''Xorg'' configuration, meaning things may not work if you later plug your Wacom tablet into a different USB port.<br />
<br />
InputDevice "cursor" "SendCoreEvents"<br />
InputDevice "stylus" "SendCoreEvents"<br />
InputDevice "eraser" "SendCoreEvents"<br />
Firstly, add these to the ''ServerLayout'' section of your ''Xorg'' config (/etc/X11/xorg.conf).<br />
<br />
cat /proc/bus/input/devices<br />
Now we need to determine the location of your tablet ''device''. Run the command above, and take note of the ''event'' number of the ''Handlers'' row. We will use this to set the correct device in our ''Xorg'' config below.<br />
<br />
I: Bus=0003 Vendor=056a Product=0016 Version=0403<br />
N: Name="Wacom Graphire4 6x8"<br />
P: Phys=<br />
S: Sysfs=/class/input/input7<br />
H: Handlers=mouse2 event7 ts2 <br />
B: EV=1f<br />
B: KEY=1c63 0 70011 0 0 0 0 0 0 0 0<br />
B: REL=100<br />
B: ABS=100 3000003<br />
B: MSC=1<br />
Here is an example of the output for my ''Graphire4'' tablet. From this, we can determine that my tablet device goes through ''/dev/input/event7''.<br />
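If you prefer not to read the output by eye, the event handler can also be extracted with standard tools. This is only a sketch: it operates on the sample entry shown above, and assumes the device name contains "Wacom" (on a real system you would filter the file itself, e.g. with ''grep -A 4 Wacom /proc/bus/input/devices'').

```shell
# Sample entry from /proc/bus/input/devices (see the output above);
# on a real system: grep -A 4 'Wacom' /proc/bus/input/devices
entry='N: Name="Wacom Graphire4 6x8"
H: Handlers=mouse2 event7 ts2'

# Pull the eventN token out of the Handlers line.
event=$(printf '%s\n' "$entry" | grep -o 'event[0-9]*')
echo "/dev/input/$event"
```

The resulting path is what goes into the ''Device'' option of the Xorg config below.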
<br />
Section "InputDevice"<br />
Identifier "stylus"<br />
Driver "wacom"<br />
Option "Type" "stylus"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "Mode" "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "Threshold" "5"<br />
EndSection<br />
<br />
Section "InputDevice"<br />
Identifier "eraser"<br />
Driver "wacom"<br />
Option "Type" "eraser"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "Mode" "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "Threshold" "5"<br />
EndSection<br />
<br />
Section "InputDevice"<br />
Identifier "cursor"<br />
Driver "wacom"<br />
Option "Type" "cursor"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "Mode" "Absolute"<br />
Option "Vendor" "WACOM"<br />
EndSection<br />
Now update your ''Xorg'' config (/etc/X11/xorg.conf) as above.<br />
<br />
To learn about each of the Wacom tablet ''Xorg'' options, check out the ''man pages'' found at [http://linuxwacom.sourceforge.net/index.php/howto/inputdev Linux Wacom Project HOWTO - 5.1 - Adding the InputDevices].<br />
<br />
I recommend you check out [http://linuxwacom.sourceforge.net/index.php/howto/x11 Linux Wacom Project HOWTO - 5.0 - Configuring X11] and [http://gentoo-wiki.com/HOWTO_Wacom_Tablet#Xorg Gentoo Linux Wiki - HOWTO Wacom Tablet - Installing - Xorg].<br />
<br />
==== TwinView Setup ====<br />
<br />
If you are going to use two monitors, the aspect ratio while using the tablet might feel unnatural. To fix this, you need to add<br />
<br />
Option "TwinView" "horizontal"<br />
<br />
to all of your Wacom ''InputDevice'' entries in the xorg.conf file.<br />
You may read more about that [http://ubuntuforums.org/showthread.php?t=640898 here].<br />
<br />
=== Graphire4 buttons ===<br />
<br />
InputDevice "pad" "SendCoreEvents"<br />
Add this to the ''ServerLayout'' section of your ''Xorg'' config (/etc/X11/xorg.conf).<br />
<br />
*Note: it was mentioned at [http://gentoo-wiki.com/HOWTO_Wacom_Tablet#Graphire4_buttons Gentoo Linux Wiki - HOWTO Wacom Tablet - Installing - Xorg - Graphire4 buttons] that there was advice somewhere NOT to add "SendCoreEvents" to the line above, but also that without it these buttons will not work.<br />
<br />
Section "InputDevice"<br />
Identifier "pad"<br />
Driver "wacom"<br />
Option "Type" "pad"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "ButtonsOnly" "on"<br />
EndSection<br />
Now update your ''Xorg'' config (/etc/X11/xorg.conf) as above.<br />
<br />
I recommend you check out [http://gentoo-wiki.com/HOWTO_Wacom_Tablet#Graphire4_buttons Gentoo Linux Wiki - HOWTO Wacom Tablet - Installing - Xorg - Graphire4 buttons].<br />
<br />
=== Xorg crashes when logging in ===<br />
<br />
If that happens to you, you need to apply this patch to Xorg and recompile it (from http://sourceforge.net/tracker/index.php?func=detail&aid=1843335&group_id=69596&atid=525124):<br />
<br />
diff -ur xorg-server-1.4.orig/xkb/xkbLEDs.c xorg-server-1.4/xkb/xkbLEDs.c<br />
--- xorg-server-1.4.orig/xkb/xkbLEDs.c 2007-11-01 20:49:02.000000000<br />
+0100<br />
+++ xorg-server-1.4/xkb/xkbLEDs.c 2007-11-01 20:48:03.000000000<br />
+0100<br />
@@ -63,6 +63,9 @@<br />
<br />
sli= XkbFindSrvLedInfo(dev,XkbDfltXIClass,XkbDfltXIId,0);<br />
<br />
+ if (!sli)<br />
+ return 0;<br />
+<br />
if (state_changes&(XkbModifierStateMask|XkbGroupStateMask))<br />
update|= sli->usesEffective;<br />
if (state_changes&(XkbModifierBaseMask|XkbGroupBaseMask))<br />
<br />
Be advised that this patch could lead to unexpected behavior. It is not an official patch; use it at your own risk.<br />
<br />
<br />
=== Tablet devices still do not appear ===<br />
<br />
Start ''Xorg'' with tablet connected. Then look at logs (/var/log/Xorg.0.log) and search for those errors:<br />
<br />
Error opening /dev/input/wacom : Success<br />
(EE) xf86OpenSerial: Cannot open device /dev/input/wacom<br />
No such file or directory.<br />
<br />
This error will show even when the device exists.<br />
<br />
The second error is:<br />
<br />
usbDetect: can not ioctl version<br />
Wacom xf86WcmWrite error : Invalid argument<br />
<br />
If you see these errors, check whether your Wacom device is /dev/input/ts3 or another ts device (or a symlink to such a device). If it is, the device is being handled by the Compaq touchscreen emulation of the ''tsdev'' module.<br />
Just unload the module<br />
<br />
modprobe -r tsdev<br />
<br />
and add the module to the blacklist in /etc/rc.conf.<br />
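At the time this applied, modules were kept from loading at boot by prefixing them with a bang in the MODULES array of /etc/rc.conf. A sketch (your MODULES line will contain other entries as well):

```
MODULES=(!tsdev)
```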
<br />
==Dynamic (udev) Xorg setup==<br />
Again thanks to [http://gentoo-wiki.com/HOWTO_Wacom_Tablet Gentoo Linux Wiki - HOWTO Wacom Tablet] for the information! This was done with a ''Volito2'', and so reflects the experiences with that tablet, but it should work for any tablet supported by the linuxwacom project.<br />
<br />
'''Note''': The linuxwacom package from AUR already includes a udev rules file, so you might skip this part and move on to the xorg.conf configuration if you are using that package.<br />
<br />
Install ''udev'' from the repositories.<br />
Run the ''lsusb'' command. It should return something like this<br />
<br />
Bus 002 Device 007: ID 056a:0062 Wacom Co., Ltd<br />
Bus 002 Device 006: ID 03eb:0902 Atmel Corp.<br />
Bus 002 Device 005: ID 0bc2:0503 Seagate RSS LLC<br />
Bus 002 Device 004: ID 05e3:0660 Genesys Logic, Inc. USB 2.0 Hub<br />
Bus 002 Device 001: ID 1d6b:0002<br />
Bus 003 Device 001: ID 1d6b:0001<br />
Bus 001 Device 003: ID 06a3:8000 Saitek PLC<br />
Bus 001 Device 002: ID 045e:00d1 Microsoft Corp.<br />
Bus 001 Device 001: ID 1d6b:0001<br />
Here you can see my tablet among other devices. We are interested in the tablet and the mouse - unless, of course, there is no mouse attached to the computer.<br />
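To pull the vendor and product IDs out of an lsusb line programmatically, shell parameter expansion is enough. This sketch works on the sample line above; the variable names are purely illustrative:

```shell
# Example lsusb line for the tablet (taken from the output above);
# on a real system: lsusb | grep -i wacom
line='Bus 002 Device 007: ID 056a:0062 Wacom Co., Ltd'

ids=${line#*ID }             # drop everything up to and including "ID "
vendor=${ids%%:*}            # part before the colon  -> idVendor
product_tmp=${ids#*:}
product=${product_tmp%% *}   # after the colon, before the space -> idProduct
echo "vendor=$vendor product=$product"
```

These are the values to substitute into the udev rules below.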
Next, create the file ''10-local.rules'' in ''/etc/udev/rules.d'' and add these two lines:<br />
<br />
KERNEL=="event*", SYSFS{idVendor}=="056a", NAME="input/%k", SYMLINK="input/wacom"<br />
KERNEL=="mouse*", SYSFS{idProduct}=="045e", NAME="input/%k", SYMLINK="input/mouse_udev"<br />
Of course, you need to change '056a' and '045e' to what lsusb returns for you - I used the VendorID for my tablet and the ProductID for my mouse.<br />
Save the file and start udev using the command ''/etc/start_udev''.<br />
Check to make sure that the new device nodes have appeared in ''/dev/input''.<br />
<br />
bash-3.2# cd /dev/input<br />
bash-3.2# ls<br />
by-id event0 event2 event4 event6 event8 mouse0 mouse2 wacom<br />
by-path event1 event3 event5 event7 mice mouse1 mouse_udev<br />
You can even check that the device works by<br />
<br />
# cat wacom<br />
It should make lots of odd characters appear on screen.<br />
If it works, then all that is left to do is add the relevant information to ''/etc/X11/xorg.conf''.<br />
<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "stylus"<br />
Option "Device" "/dev/input/wacom"<br />
Option "Type" "stylus"<br />
Option "USB" "on" # USB ONLY<br />
Option "Mode" "Relative" # other option: "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "tilt" "on" # add this if your tablet supports tilt<br />
Option "Threshold" "5" # the official linuxwacom howto advises this line<br />
EndSection<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "eraser"<br />
Option "Device" "/dev/input/wacom"<br />
Option "Type" "eraser"<br />
Option "USB" "on" # USB ONLY<br />
Option "Mode" "Relative" # other option: "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "tilt" "on" # add this if your tablet supports tilt<br />
Option "Threshold" "5" # the official linuxwacom howto advises this line<br />
EndSection<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "cursor"<br />
Option "Device" "/dev/input/wacom"<br />
Option "Type" "cursor"<br />
Option "USB" "on" # USB ONLY<br />
Option "Mode" "Relative" # other option: "Absolute"<br />
Option "Vendor" "WACOM"<br />
EndSection<br />
Make sure that you also change the path (''"Device"'') to your mouse, as it will be ''/dev/input/mouse_udev'' now.<br />
<br />
Section "InputDevice"<br />
Identifier "Mouse1"<br />
Driver "mouse"<br />
Option "CorePointer"<br />
Option "Device" "/dev/input/mouse_udev"<br />
Option "SendCoreEvents" "true"<br />
Option "Protocol" "IMPS/2"<br />
Option "ZAxisMapping" "4 5"<br />
Option "Buttons" "5"<br />
EndSection<br />
Add this to the ''ServerLayout'' section<br />
<br />
InputDevice "cursor" "SendCoreEvents" <br />
InputDevice "stylus" "SendCoreEvents"<br />
InputDevice "eraser" "SendCoreEvents"<br />
And finally, make sure to update the identifier of your mouse in the ''ServerLayout'' section - as mine went from<br />
<br />
InputDevice "Mouse0" "CorePointer"<br />
To<br />
<br />
InputDevice "Mouse1" "CorePointer"<br />
<br />
==Xorg input hotplugging setup==<br />
<br />
For using a wacom/WALTOP/N-Trig tablet with Xorg hotplugging.<br />
<br />
Create /etc/hal/fdi/policy/custom_wacom.fdi with this code and restart X and HAL:<br />
<br />
<?xml version="1.0" encoding="UTF-8"?> <!-- -*- SGML -*- --><br />
<br />
<deviceinfo version="0.2"><br />
<device><br />
<match key="info.capabilities" contains="input"><br />
<match key="info.product" contains="Wacom"><br />
<merge key="input.x11_driver" type="string">wacom</merge><br />
<merge key="input.x11_options.Type" type="string">stylus</merge><br />
</match><br />
<match key="info.product" contains="WALTOP"><br />
<merge key="input.x11_driver" type="string">wacom</merge><br />
<merge key="input.x11_options.Type" type="string">stylus</merge><br />
</match><br />
</match><br />
<!-- N-Trig Duosense Electromagnetic Digitizer --><br />
<match key="info.product" contains="HID 1b96:0001"><br />
<match key="info.parent" contains="if0"><br />
<merge key="input.x11_driver" type="string">wacom</merge><br />
<merge key="input.x11_options.Type" type="string">stylus</merge><br />
</match><br />
</match><br />
</device><br />
</deviceinfo><br />
<br />
= The GIMP =<br />
<br />
To enable proper usage and pressure-sensitive painting in [http://www.gimp.org The GIMP], just go to ''"Preferences -> Input Devices -> Configure Extended Input Devices..."''. Now for each of your ''eraser'', ''stylus'', and ''cursor'' '''devices''', set the '''mode''' to ''Screen'', and remember to save.<br />
<br />
*Please take note that if present, the ''pad'' '''device''' should be kept disabled as I don't think The GIMP supports such things. Alternatively, to use such features of your tablet you should map them to keyboard commands with a program such as [http://hem.bredband.net/devel/wacom/ Wacom ExpressKeys].<br />
<br />
*You should also take note that the tool selected for the ''stylus'' is independent of that of the ''eraser''. This can actually be quite handy, as you can have the ''eraser'' set to act as any tool you like.<br />
<br />
I recommend you check out [http://linuxwacom.sourceforge.net/index.php/howto/gimp Linux Wacom Project HOWTO - 10.0 - Working With Gimp], and the ''Setting up GIMP'' section of [http://www.gimptalk.com/forum/topic.php?t=17992&start=1 GIMP Talk - Community - Install Guide: Getting Wacom Drawing Tablets To Work In Gimp].<br />
<br />
= Inkscape =<br />
<br />
As in The GIMP, simply go to ''"File -> Input Devices..."''. Now for each of your ''eraser'', ''stylus'', and ''cursor'' '''devices''', set the '''mode''' to ''Screen'', and remember to save.<br />
<br />
= Krita = <br />
<br />
To get your tablet working in Krita, simply go to ''"Settings -> Configure Krita..."'', click on ''Tablet'', and then, as in Inkscape and GIMP, set the mode of the ''stylus'' and any other devices to ''Screen''.<br />
<br />
== Bamboo ==<br />
<br />
'''Note''': Some users reported problems with linuxwacom 0.8.1-1 and Bamboo tablets: the cursor jumped around when trying to use stylus tilt. To avoid that problem, simply use linuxwacom 0.8.0 (you can simply edit the pkgver in the PKGBUILD).<br />
<br />
If you use an older version of linuxwacom, it can happen that you are not able to use your pen with GIMP or Inkscape when configured as above, since the stylus fires a button2 event instead of a button1 event; the same goes for the eraser. To correct this, add these lines to the appropriate sections of your xorg.conf (do not just copy and paste the whole section - only add the part about the buttons).<br />
<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "stylus"<br />
Option "Button1" "1" #this line is important<br />
Option "Button2" "1" #this line is important<br />
EndSection<br />
<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "eraser"<br />
Option "Button1" "1" #this line is important<br />
Option "Button2" "1" #this line is important<br />
EndSection<br />
<br />
Be advised that this way pressure-sensitive painting in GIMP and Inkscape will work, but the lower button of the pen will also fire a button1 event, the same as the stylus and eraser. You cannot configure any other button for Button2; it has to be the same as Button1! There is no need to add these lines to the cursor section, since the Bamboo does not ship with a mouse. Still, I advise you not to remove the cursor device as an input device, not even from the ServerLayout section - that led to an unstable X server in my case.<br />
<br />
= Linuxwacom 0.8.1 bug =<br />
<br />
If you have trouble with the linuxwacom 0.8.1 beta/developer version driver as reported [http://bbs.archlinux.org/viewtopic.php?pid=421375 here], such as the cursor freezing when you apply pressure, then you should try the linuxwacom 0.8.0 production version. Just download the existing PKGBUILD and other files from the [http://aur.archlinux.org/packages/linuxwacom/linuxwacom/ AUR] and change '''pkgver''' from 0.8.1 to '''0.8.0''', and the first '''md5sum''' from 4b78f1b66f6e9097a393cf1e3cdf87a3 to '''1d89b464392515492bb7b97c20e68d4e'''.<br />
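After editing the PKGBUILD, you can check by hand that the tarball you downloaded matches the expected md5sum. A sketch using a dummy file (substitute the real linuxwacom tarball and the md5sum from the PKGBUILD):

```shell
# Create a dummy file standing in for the downloaded tarball.
printf 'hello' > sample.tar.bz2

# md5sum of the string "hello" (a well-known value, used only for this demo).
expected='5d41402abc4b2a76b9719d911017c592'

actual=$(md5sum sample.tar.bz2 | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then echo 'checksum OK'; fi
```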
<br />
= References =<br />
*[http://gentoo-wiki.com/HOWTO_Wacom_Tablet Gentoo Linux Wiki - HOWTO Wacom Tablet]<br />
*[http://linuxwacom.sourceforge.net/index.php/howto/main Linux Wacom Project HOWTO]<br />
*[http://www.gimptalk.com/forum/topic.php?t=17992&start=1 GIMP Talk - Community - Install Guide: Getting Wacom Drawing Tablets To Work In Gimp]</div>Veoxhttps://wiki.archlinux.org/index.php?title=Graphics_tablet&diff=68471Graphics tablet2009-05-09T13:10:47Z<p>Veox: /* Configure Xorg */</p>
<hr />
<div>[[Category:Input devices (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
= Introduction =<br />
<br />
Before we begin, I would like to point out that this guide is only for ''USB''-based Wacom tablets. Furthermore, you can either set up a static ''Xorg'' configuration, meaning things may not work if you later plug your Wacom tablet into a different ''USB'' port, or follow the dynamic instructions further down. Finally, this guide is based on my experience of installing my ''Graphire4'' tablet, so others may like to add things specific to other Wacom tablets. I welcome others to update this wiki to include a wider range of information.<br />
<br />
I'd also like to mention that this wiki is very much influenced by the very helpful [http://gentoo-wiki.com/HOWTO_Wacom_Tablet Gentoo Linux Wiki - HOWTO Wacom Tablet], which I recommend visiting if you would like to learn about things not covered here.<br />
<br />
= Installing =<br />
<br />
== Install Linuxwacom ==<br />
<br />
Thanks to [http://linuxwacom.sourceforge.net The Linux Wacom Project], you only need to install the ''linuxwacom'' package, which contains everything needed to use a Wacom tablet on Linux. You can get this from the [[AUR]].<br />
<br />
== Configure Xorg ==<br />
<br />
''Note: static configuration is deprecated by the X.org project. Consider using HAL (see below) instead to get autodetection and automatic configuration.''<br />
<br />
Again, I'd like to note that I only cover how to set up a static ''Xorg'' configuration, meaning things may not work if you later plug your Wacom tablet into a different USB port.<br />
<br />
InputDevice "cursor" "SendCoreEvents"<br />
InputDevice "stylus" "SendCoreEvents"<br />
InputDevice "eraser" "SendCoreEvents"<br />
Firstly, add these to the ''ServerLayout'' section of your ''Xorg'' config (/etc/X11/xorg.conf).<br />
<br />
cat /proc/bus/input/devices<br />
Now we need to determine the location of your tablet ''device''. Run the command above, and take note of the ''event'' number of the ''Handlers'' row. We will use this to set the correct device in our ''Xorg'' config below.<br />
<br />
I: Bus=0003 Vendor=056a Product=0016 Version=0403<br />
N: Name="Wacom Graphire4 6x8"<br />
P: Phys=<br />
S: Sysfs=/class/input/input7<br />
H: Handlers=mouse2 event7 ts2 <br />
B: EV=1f<br />
B: KEY=1c63 0 70011 0 0 0 0 0 0 0 0<br />
B: REL=100<br />
B: ABS=100 3000003<br />
B: MSC=1<br />
Here is an example of the output for my ''Graphire4'' tablet. From this, we can determine that my tablet device goes through ''/dev/input/event7''.<br />
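If you prefer not to read the output by eye, the event handler can also be extracted with standard tools. This is only a sketch: it operates on the sample entry shown above, and assumes the device name contains "Wacom" (on a real system you would filter the file itself, e.g. with ''grep -A 4 Wacom /proc/bus/input/devices'').

```shell
# Sample entry from /proc/bus/input/devices (see the output above);
# on a real system: grep -A 4 'Wacom' /proc/bus/input/devices
entry='N: Name="Wacom Graphire4 6x8"
H: Handlers=mouse2 event7 ts2'

# Pull the eventN token out of the Handlers line.
event=$(printf '%s\n' "$entry" | grep -o 'event[0-9]*')
echo "/dev/input/$event"
```

The resulting path is what goes into the ''Device'' option of the Xorg config below.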
<br />
Section "InputDevice"<br />
Identifier "stylus"<br />
Driver "wacom"<br />
Option "Type" "stylus"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "Mode" "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "Threshold" "5"<br />
EndSection<br />
<br />
Section "InputDevice"<br />
Identifier "eraser"<br />
Driver "wacom"<br />
Option "Type" "eraser"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "Mode" "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "Threshold" "5"<br />
EndSection<br />
<br />
Section "InputDevice"<br />
Identifier "cursor"<br />
Driver "wacom"<br />
Option "Type" "cursor"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "Mode" "Absolute"<br />
Option "Vendor" "WACOM"<br />
EndSection<br />
Now update your ''Xorg'' config (/etc/X11/xorg.conf) as above.<br />
<br />
To learn about each of the Wacom tablet ''Xorg'' options, check out the ''man pages'' found at [http://linuxwacom.sourceforge.net/index.php/howto/inputdev Linux Wacom Project HOWTO - 5.1 - Adding the InputDevices].<br />
<br />
I recommend you check out [http://linuxwacom.sourceforge.net/index.php/howto/x11 Linux Wacom Project HOWTO - 5.0 - Configuring X11] and [http://gentoo-wiki.com/HOWTO_Wacom_Tablet#Xorg Gentoo Linux Wiki - HOWTO Wacom Tablet - Installing - Xorg].<br />
<br />
==== TwinView Setup ====<br />
<br />
If you are going to use two monitors, the aspect ratio while using the tablet might feel unnatural. To fix this, you need to add<br />
<br />
Option "TwinView" "horizontal"<br />
<br />
to all of your Wacom ''InputDevice'' entries in the xorg.conf file.<br />
You may read more about that [http://ubuntuforums.org/showthread.php?t=640898 here].<br />
<br />
=== Graphire4 buttons ===<br />
<br />
InputDevice "pad" "SendCoreEvents"<br />
Add this to the ''ServerLayout'' section of your ''Xorg'' config (/etc/X11/xorg.conf).<br />
<br />
*Note: it was mentioned at [http://gentoo-wiki.com/HOWTO_Wacom_Tablet#Graphire4_buttons Gentoo Linux Wiki - HOWTO Wacom Tablet - Installing - Xorg - Graphire4 buttons] that there was advice somewhere NOT to add "SendCoreEvents" to the line above, but also that without it these buttons will not work.<br />
<br />
Section "InputDevice"<br />
Identifier "pad"<br />
Driver "wacom"<br />
Option "Type" "pad"<br />
Option "Device" "/dev/input/event7"<br />
Option "USB" "on"<br />
Option "ButtonsOnly" "on"<br />
EndSection<br />
Now update your ''Xorg'' config (/etc/X11/xorg.conf) as above.<br />
<br />
I recommend you check out [http://gentoo-wiki.com/HOWTO_Wacom_Tablet#Graphire4_buttons Gentoo Linux Wiki - HOWTO Wacom Tablet - Installing - Xorg - Graphire4 buttons].<br />
<br />
=== Xorg crashes when logging in ===<br />
<br />
If that happens to you, you need to apply this patch to Xorg and recompile it (from http://sourceforge.net/tracker/index.php?func=detail&aid=1843335&group_id=69596&atid=525124):<br />
<br />
diff -ur xorg-server-1.4.orig/xkb/xkbLEDs.c xorg-server-1.4/xkb/xkbLEDs.c<br />
--- xorg-server-1.4.orig/xkb/xkbLEDs.c 2007-11-01 20:49:02.000000000<br />
+0100<br />
+++ xorg-server-1.4/xkb/xkbLEDs.c 2007-11-01 20:48:03.000000000<br />
+0100<br />
@@ -63,6 +63,9 @@<br />
<br />
sli= XkbFindSrvLedInfo(dev,XkbDfltXIClass,XkbDfltXIId,0);<br />
<br />
+ if (!sli)<br />
+ return 0;<br />
+<br />
if (state_changes&(XkbModifierStateMask|XkbGroupStateMask))<br />
update|= sli->usesEffective;<br />
if (state_changes&(XkbModifierBaseMask|XkbGroupBaseMask))<br />
<br />
Be advised that this patch could lead to unexpected behavior. It is not an official patch; use it at your own risk.<br />
<br />
<br />
=== Tablet devices still do not appear ===<br />
<br />
Start ''Xorg'' with tablet connected. Then look at logs (/var/log/Xorg.0.log) and search for those errors:<br />
<br />
Error opening /dev/input/wacom : Success<br />
(EE) xf86OpenSerial: Cannot open device /dev/input/wacom<br />
No such file or directory.<br />
<br />
This error will show even when the device exists.<br />
<br />
The second error is:<br />
<br />
usbDetect: can not ioctl version<br />
Wacom xf86WcmWrite error : Invalid argument<br />
<br />
If you see these errors, check whether your Wacom device is /dev/input/ts3 or another ts device (or a symlink to such a device). If it is, the device is being handled by the Compaq touchscreen emulation of the ''tsdev'' module.<br />
Just unload the module<br />
<br />
modprobe -r tsdev<br />
<br />
and add the module to the blacklist in /etc/rc.conf.<br />
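At the time this applied, modules were kept from loading at boot by prefixing them with a bang in the MODULES array of /etc/rc.conf. A sketch (your MODULES line will contain other entries as well):

```
MODULES=(!tsdev)
```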
<br />
==Dynamic (udev) Xorg setup==<br />
Again thanks to [http://gentoo-wiki.com/HOWTO_Wacom_Tablet Gentoo Linux Wiki - HOWTO Wacom Tablet] for the information! This was done with a ''Volito2'', and so reflects the experiences with that tablet, but it should work for any tablet supported by the linuxwacom project.<br />
<br />
'''Note''': The linuxwacom package from AUR already includes a udev rules file, so you might skip this part and move on to the xorg.conf configuration if you are using that package.<br />
<br />
Install ''udev'' from the repositories.<br />
Run the ''lsusb'' command. It should return something like this<br />
<br />
Bus 002 Device 007: ID 056a:0062 Wacom Co., Ltd<br />
Bus 002 Device 006: ID 03eb:0902 Atmel Corp.<br />
Bus 002 Device 005: ID 0bc2:0503 Seagate RSS LLC<br />
Bus 002 Device 004: ID 05e3:0660 Genesys Logic, Inc. USB 2.0 Hub<br />
Bus 002 Device 001: ID 1d6b:0002<br />
Bus 003 Device 001: ID 1d6b:0001<br />
Bus 001 Device 003: ID 06a3:8000 Saitek PLC<br />
Bus 001 Device 002: ID 045e:00d1 Microsoft Corp.<br />
Bus 001 Device 001: ID 1d6b:0001<br />
Here you can see my tablet among other devices. We are interested in the tablet and the mouse - unless, of course, there is no mouse attached to the computer.<br />
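To pull the vendor and product IDs out of an lsusb line programmatically, shell parameter expansion is enough. This sketch works on the sample line above; the variable names are purely illustrative:

```shell
# Example lsusb line for the tablet (taken from the output above);
# on a real system: lsusb | grep -i wacom
line='Bus 002 Device 007: ID 056a:0062 Wacom Co., Ltd'

ids=${line#*ID }             # drop everything up to and including "ID "
vendor=${ids%%:*}            # part before the colon  -> idVendor
product_tmp=${ids#*:}
product=${product_tmp%% *}   # after the colon, before the space -> idProduct
echo "vendor=$vendor product=$product"
```

These are the values to substitute into the udev rules below.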
Next, create the file ''10-local.rules'' in ''/etc/udev/rules.d'' and add these two lines:<br />
<br />
KERNEL=="event*", SYSFS{idVendor}=="056a", NAME="input/%k", SYMLINK="input/wacom"<br />
KERNEL=="mouse*", SYSFS{idProduct}=="045e", NAME="input/%k", SYMLINK="input/mouse_udev"<br />
Of course, you need to change '056a' and '045e' to what lsusb returns for you - I used the VendorID for my tablet and the ProductID for my mouse.<br />
Save the file and start udev using the command ''/etc/start_udev''.<br />
Check to make sure that the new device nodes have appeared in ''/dev/input''.<br />
<br />
bash-3.2# cd /dev/input<br />
bash-3.2# ls<br />
by-id event0 event2 event4 event6 event8 mouse0 mouse2 wacom<br />
by-path event1 event3 event5 event7 mice mouse1 mouse_udev<br />
You can even check that the device works by<br />
<br />
# cat wacom<br />
It should make lots of odd characters appear on screen.<br />
If it works, then all that is left to do is add the relevant information to ''/etc/X11/xorg.conf''.<br />
<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "stylus"<br />
Option "Device" "/dev/input/wacom"<br />
Option "Type" "stylus"<br />
Option "USB" "on" # USB ONLY<br />
Option "Mode" "Relative" # other option: "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "tilt" "on" # add this if your tablet supports tilt<br />
Option "Threshold" "5" # the official linuxwacom howto advises this line<br />
EndSection<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "eraser"<br />
Option "Device" "/dev/input/wacom"<br />
Option "Type" "eraser"<br />
Option "USB" "on" # USB ONLY<br />
Option "Mode" "Relative" # other option: "Absolute"<br />
Option "Vendor" "WACOM"<br />
Option "tilt" "on" # add this if your tablet supports tilt<br />
Option "Threshold" "5" # the official linuxwacom howto advises this line<br />
EndSection<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "cursor"<br />
Option "Device" "/dev/input/wacom"<br />
Option "Type" "cursor"<br />
Option "USB" "on" # USB ONLY<br />
Option "Mode" "Relative" # other option: "Absolute"<br />
Option "Vendor" "WACOM"<br />
EndSection<br />
Make sure that you also change the path (''"Device"'') to your mouse, as it will be ''/dev/input/mouse_udev'' now.<br />
<br />
Section "InputDevice"<br />
Identifier "Mouse1"<br />
Driver "mouse"<br />
Option "CorePointer"<br />
Option "Device" "/dev/input/mouse_udev"<br />
Option "SendCoreEvents" "true"<br />
Option "Protocol" "IMPS/2"<br />
Option "ZAxisMapping" "4 5"<br />
Option "Buttons" "5"<br />
EndSection<br />
Add this to the ''ServerLayout'' section<br />
<br />
InputDevice "cursor" "SendCoreEvents" <br />
InputDevice "stylus" "SendCoreEvents"<br />
InputDevice "eraser" "SendCoreEvents"<br />
And finally, make sure to update the identifier of your mouse in the ''ServerLayout'' section - as mine went from<br />
<br />
InputDevice "Mouse0" "CorePointer"<br />
To<br />
<br />
InputDevice "Mouse1" "CorePointer"<br />
<br />
==Xorg input hotplugging setup==<br />
<br />
For using a wacom/WALTOP/N-Trig tablet with Xorg hotplugging.<br />
<br />
Create /etc/hal/fdi/policy/custom_wacom.fdi with this code and restart X and HAL:<br />
<br />
<?xml version="1.0" encoding="UTF-8"?> <!-- -*- SGML -*- --><br />
<br />
<deviceinfo version="0.2"><br />
<device><br />
<match key="info.capabilities" contains="input"><br />
<match key="info.product" contains="Wacom"><br />
<merge key="input.x11_driver" type="string">wacom</merge><br />
<merge key="input.x11_options.Type" type="string">stylus</merge><br />
</match><br />
<match key="info.product" contains="WALTOP"><br />
<merge key="input.x11_driver" type="string">wacom</merge><br />
<merge key="input.x11_options.Type" type="string">stylus</merge><br />
</match><br />
</match><br />
<!-- N-Trig Duosense Electromagnetic Digitizer --><br />
<match key="info.product" contains="HID 1b96:0001"><br />
<match key="info.parent" contains="if0"><br />
<merge key="input.x11_driver" type="string">wacom</merge><br />
<merge key="input.x11_options.Type" type="string">stylus</merge><br />
</match><br />
</match><br />
</device><br />
</deviceinfo><br />
<br />
= The GIMP =<br />
<br />
To enable proper usage and pressure-sensitive painting in [http://www.gimp.org The GIMP], just go to ''"Preferences -> Input Devices -> Configure Extended Input Devices..."''. Now for each of your ''eraser'', ''stylus'', and ''cursor'' '''devices''', set the '''mode''' to ''Screen'', and remember to save.<br />
<br />
*Please take note that if present, the ''pad'' '''device''' should be kept disabled as I don't think The GIMP supports such things. Alternatively, to use such features of your tablet you should map them to keyboard commands with a program such as [http://hem.bredband.net/devel/wacom/ Wacom ExpressKeys].<br />
<br />
*You should also take note that the tool selected for the ''stylus'' is independent of that of the ''eraser''. This can actually be quite handy, as you can have the ''eraser'' set to act as any tool you like.<br />
<br />
I recommend you check out [http://linuxwacom.sourceforge.net/index.php/howto/gimp Linux Wacom Project HOWTO - 10.0 - Working With Gimp], and the ''Setting up GIMP'' section of [http://www.gimptalk.com/forum/topic.php?t=17992&start=1 GIMP Talk - Community - Install Guide: Getting Wacom Drawing Tablets To Work In Gimp].<br />
<br />
= Inkscape =<br />
<br />
As in The GIMP, simply go to ''"File -> Input Devices..."''. Now for each of your ''eraser'', ''stylus'', and ''cursor'' '''devices''', set the '''mode''' to ''Screen'', and remember to save.<br />
<br />
= Krita = <br />
<br />
To get your tablet working in Krita, simply go to ''"Settings -> Configure Krita..."'', click on ''Tablet'', and then, as in Inkscape and GIMP, set the mode of the ''stylus'' and any other devices to ''Screen''.<br />
<br />
== Bamboo ==<br />
<br />
'''Note''': Some users reported problems with linuxwacom 0.8.1-1 and Bamboo tablets: the cursor jumped around when trying to use stylus tilt. To avoid that problem, simply use linuxwacom 0.8.0 (you can simply edit the pkgver in the PKGBUILD).<br />
<br />
If you use an older version of linuxwacom, it can happen that you are not able to use your pen with GIMP or Inkscape when configured as above, since the stylus fires a button2 event instead of a button1 event; the same goes for the eraser. To correct this, add these lines to the appropriate sections of your xorg.conf (do not just copy and paste the whole section - only add the part about the buttons).<br />
<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "stylus"<br />
Option "Button1" "1" #this line is important<br />
Option "Button2" "1" #this line is important<br />
EndSection<br />
<br />
Section "InputDevice"<br />
Driver "wacom"<br />
Identifier "eraser"<br />
Option "Button1" "1" #this line is important<br />
Option "Button2" "1" #this line is important<br />
EndSection<br />
<br />
Be advised that this way pressure-sensitive painting in GIMP and Inkscape will work, but the lower button of the pen will also fire a button1 event, the same as the stylus and eraser. You cannot configure any other button for Button2; it has to be the same as Button1! There is no need to add these lines to the cursor section, since the Bamboo does not ship with a mouse. Still, I advise you not to remove the cursor device as an input device, not even from the ServerLayout section - that led to an unstable X server in my case.<br />
<br />
= Linuxwacom 0.8.1 bug =<br />
<br />
If you have trouble with the linuxwacom 0.8.1 beta/developer version driver as reported [http://bbs.archlinux.org/viewtopic.php?pid=421375 here], such as the cursor freezing when you apply pressure, then you should try the linuxwacom 0.8.0 production version. Just download the existing PKGBUILD and other files from the [http://aur.archlinux.org/packages/linuxwacom/linuxwacom/ AUR] and change '''pkgver''' from 0.8.1 to '''0.8.0''', and the first '''md5sum''' from 4b78f1b66f6e9097a393cf1e3cdf87a3 to '''1d89b464392515492bb7b97c20e68d4e'''.<br />
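After editing the PKGBUILD, you can check by hand that the tarball you downloaded matches the expected md5sum. A sketch using a dummy file (substitute the real linuxwacom tarball and the md5sum from the PKGBUILD):

```shell
# Create a dummy file standing in for the downloaded tarball.
printf 'hello' > sample.tar.bz2

# md5sum of the string "hello" (a well-known value, used only for this demo).
expected='5d41402abc4b2a76b9719d911017c592'

actual=$(md5sum sample.tar.bz2 | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then echo 'checksum OK'; fi
```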
<br />
= References =<br />
*[http://gentoo-wiki.com/HOWTO_Wacom_Tablet Gentoo Linux Wiki - HOWTO Wacom Tablet]<br />
*[http://linuxwacom.sourceforge.net/index.php/howto/main Linux Wacom Project HOWTO]<br />
*[http://www.gimptalk.com/forum/topic.php?t=17992&start=1 GIMP Talk - Community - Install Guide: Getting Wacom Drawing Tablets To Work In Gimp]</div>Veoxhttps://wiki.archlinux.org/index.php?title=Lisp_package_guidelines&diff=65956Lisp package guidelines2009-03-29T19:07:18Z<p>Veox: /* Directory Structure */ Add naming convention.</p>
<hr />
<div>[[Category:Package development (English)]]<br />
[[Category:Guidelines (English)]]<br />
== Background ==<br />
<br />
At the moment, there are relatively few lisp packages available in the<br />
Arch repositories. This means that at some point or another, more will<br />
likely appear. It is useful, therefore, to figure out now, while there<br />
are few packages, how they should be packaged. Therefore, this page<br />
stands as a proposed packaging guideline for lisp packages. Keep in<br />
mind, however, that this is a work in progress; if you disagree with<br />
some of the ideas suggested here, feel free to edit the page and<br />
propose something better.<br />
<br />
== Directory Structure and Naming ==<br />
<br />
There is at least one package in the base repository (libgpg-error)<br />
that includes lisp files, which are placed in<br />
'''/usr/share/common-lisp/source/gpg-error'''. In keeping with this,<br />
other lisp packages should also place their files in<br />
'''/usr/share/common-lisp/source/'''. Each package should have its own<br />
directory, so as not to clutter up this base directory.<br />
<br />
The package directory<br />
should be the name of the lisp package, not what it's called in the<br />
Arch repository (or AUR). This applies even to single-file packages.<br />
<br />
For example, a Lisp package called '''cl-ppcre''' should be called<br />
'''cl-ppcre''' in AUR and reside in '''/usr/share/common-lisp/source/cl-ppcre'''.<br />
A Lisp package called '''alexandria''' should be called '''cl-alexandria'''<br />
in AUR and reside in '''/usr/share/common-lisp/source/alexandria'''.<br />
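The naming convention above can be sketched as a tiny, purely hypothetical shell helper (the function name is illustrative and not part of any packaging tool): systems that already carry a "cl-" prefix keep their name in AUR, all others gain the prefix, while the install directory always uses the upstream system name.<br />

```shell
# aur_name: hypothetical helper illustrating the naming convention above.
# Systems already prefixed with "cl-" keep their name; all others gain
# the prefix for the AUR package name.
aur_name() {
    case "$1" in
        cl-*) printf '%s\n' "$1" ;;
        *)    printf 'cl-%s\n' "$1" ;;
    esac
}

aur_name cl-ppcre     # -> cl-ppcre
aur_name alexandria   # -> cl-alexandria
```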
<br />
== ASDF ==<br />
<br />
Try to avoid the usage of ASDF-Install as a means of installing these<br />
system-wide packages.<br />
<br />
ASDF itself may be necessary or helpful as a means of compiling and/or<br />
loading packages. In that case, it is suggested that the directory<br />
used for the central registry (the location of all of the symlinks <br />
to *.asd) be '''/usr/share/common-lisp/systems/'''.<br />
<br />
However, I have observed problems with performing the compilation with ASDF<br />
as part of the package build process. It does work during an install,<br />
though, through the use of a package.install file. Such a file<br />
might look like this:<br />
<br />
# cl-ppcre.install<br />
# arg 1: the new package version<br />
post_install() {<br />
  echo "---> Compiling lisp files <---"<br />
<br />
  clisp --silent -norc -x \<br />
    "(load #p\"/usr/share/common-lisp/source/asdf/asdf\") \<br />
    (pushnew #p\"/usr/share/common-lisp/systems/\" asdf:*central-registry* :test #'equal) \<br />
    (asdf:operate 'asdf:compile-op 'cl-ppcre)"<br />
<br />
  echo "---> Done compiling lisp files <---"<br />
<br />
  cat << EOM<br />
<br />
To load this library, load asdf and then place the following lines<br />
in your ~/.clisprc.lisp file:<br />
<br />
(push #p"/usr/share/common-lisp/systems/" asdf:*central-registry*)<br />
(asdf:operate 'asdf:load-op 'cl-ppcre)<br />
EOM<br />
}<br />
<br />
post_upgrade() {<br />
  post_install "$1"<br />
}<br />
<br />
pre_remove() {<br />
  rm /usr/share/common-lisp/source/cl-ppcre/{*.fas,*.lib}<br />
}<br />
<br />
op=$1<br />
shift<br />
<br />
"$op" "$@"<br />
<br />
Of course, for this example to work, there needs to be a symlink to<br />
package.asd in the asdf system directory. During package compilation,<br />
a stanza such as this will do the trick...<br />
<br />
pushd ${_lispdir}/systems<br />
ln -s ../source/cl-ppcre/cl-ppcre.asd .<br />
ln -s ../source/cl-ppcre/cl-ppcre-test.asd .<br />
popd<br />
<br />
...where ''$_lispdir'' is '''${startdir}/pkg/usr/share/common-lisp'''.<br />
By linking to a relative, rather than an absolute, path, it's possible<br />
to guarantee that the link will not break post-install.<br />
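This can be checked without building a package. The sketch below (with a temporary, made-up staging root) simulates the packaging tree being copied to a different root, and shows the relative link still resolving afterwards:<br />

```shell
# Simulate the packaging layout in a temporary "staging" root.
pkg=$(mktemp -d)
mkdir -p "$pkg/usr/share/common-lisp/source/cl-ppcre" \
         "$pkg/usr/share/common-lisp/systems"
echo '(asdf:defsystem :cl-ppcre)' \
    > "$pkg/usr/share/common-lisp/source/cl-ppcre/cl-ppcre.asd"

# The relative link, exactly as in the stanza above.
ln -s ../source/cl-ppcre/cl-ppcre.asd "$pkg/usr/share/common-lisp/systems/"

# "Install" the tree under a different root, as pacman would under /.
root=$(mktemp -d)
cp -a "$pkg/usr" "$root/"

# The link still resolves, because its target is relative.
cat "$root/usr/share/common-lisp/systems/cl-ppcre.asd"
```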
<br />
== Lisp-specific packaging ==<br />
<br />
When possible, do not make packages specific to a single lisp<br />
implementation; try to be as cross-platform as the package itself will<br />
allow. If, however, the package is specifically designed for a single<br />
lisp implementation (e.g., the developers haven't gotten around to<br />
adding support for others yet, or the package's purpose is<br />
specifically to provide a capability that is built in to another lisp<br />
implementation), it is appropriate to make your Arch package<br />
lisp-specific.<br />
<br />
To avoid making packages implementation-specific, ideally all<br />
implementation packages (SBCL, cmucl, clisp) would be built with the<br />
PKGBUILD field '''common-lisp'''. However, that's not the case (and<br />
that would likely cause problems for people who prefer to have<br />
multiple lisps at their fingertips). In the meantime, you could (a)<br />
not make your package depend on *any* lisp and include a statement in<br />
the package.install file telling folks to make sure they have a lisp<br />
installed (not ideal), or (b) take direction from the ''sbcl''<br />
PKGBUILD and include a conditional statement to figure out which lisp<br />
is needed (which is hackish and, again, far from ideal). Other ideas<br />
are welcome.<br />
<br />
Also note that if ASDF is needed to install/compile/load the package,<br />
things could potentially get awkward as far as dependencies go, since<br />
SBCL comes with asdf installed, clisp does not but there is an AUR<br />
package, and CMUCL may or may not have it (the author of this doc.<br />
knows next to nothing about CMUCL; sorry).<br />
<br />
People currently maintaining lisp-specific packages that don't need to<br />
be lisp-specific should consider doing at least one of the following:<br />
<br />
* Editing their PKGBUILD(s) to be cross-platform, provided someone else is not already maintaining the same package for a different lisp.<br />
<br />
* Offering to take over maintenance or help with maintenance of the same package for a different lisp, and then combining them into a single package.<br />
<br />
* Offering up their package to the maintainer of a different lisp's version of the same package, so as to allow that person to combine them into a single package.<br />
<br />
(Note that joyfulgirl, the author of this doc., currently maintains<br />
clisp versions of cl-ppcre and of stumpwm; she is open either to<br />
giving up the packages to the maintainers of the SBCL versions or to<br />
maintaining the new, cross-platform versions herself if the SBCL-version<br />
maintainers don't want to).<br />
<br />
== Things you, the reader, can do ==<br />
<br />
* Maintain lisp packages following these guidelines<br />
* Update and fix problems with these guidelines<br />
* Keep up with what's changed here<br />
* Provide (polite) thoughts, feedback, and suggestions both on this document and to people's work.<br />
* Translate this page and future updates to this page.</div>Veoxhttps://wiki.archlinux.org/index.php?title=QEMU&diff=65752QEMU2009-03-26T10:45:23Z<p>Veox: /* Preparing the installation media */ for floppies</p>
<hr />
<div>[[Category:Emulators (English)]]<br />
[[Category:HOWTOs (English)]]<br />
Qemu is a fast CPU emulator that uses dynamic translation to achieve good emulation speed. Unlike VMware and Win4Lin, it emulates the CPU instead of only virtualizing the computer. This means that it is considerably slower, but on the other hand it is much more portable, stable and secure. Plus, it is open source. It is a great solution if you want to run some simple Windows applications like MS Office on a fast PC and you want to ensure maximum compatibility. It is also a great tool for operating system development. This HOWTO describes how to set up the emulator and install Windows 9x/2000 under it.<br />
<br />
Qemu's homepage is at http://bellard.org/qemu/.<br />
<br />
=== Choosing Windows version===<br />
Qemu can run any version of Windows. However, 98, Me and XP will run at quite a low speed. You should choose either Windows 95 or Windows 2000. Surprisingly, 2000 seems to run faster than 98. The fastest one is 95, which can from time to time make you forget that you are running an emulator :)<br />
<br />
If you own both Win95 and Win98/WinMe, then 98lite (from http://www.litepc.com) might be worth trying. It decouples Internet Explorer from the operating system and replaces it with the original Windows 95 Explorer. It also enables you to do a minimal Windows installation, without all the bloat you normally cannot disable. This might be the best option, because you get the smallest, fastest and most stable Windows this way.<br />
<br />
=== Installing QEMU ===<br />
<br />
QEMU is available as a package in the [extra] repository. To install it, add the extra repo to /etc/pacman.conf, and do<br />
<pre><br />
pacman -Sy qemu<br />
</pre><br />
You can also install the optional kernel accelerator module KQEMU, which can also be found in the [extra] repository.<br />
<pre><br />
pacman -Sy kqemu<br />
</pre><br />
<br />
=== Creating the hard disk image===<br />
To run qemu you will probably need a hard disk image. This is a file which stores the contents of the emulated hard disk.<br />
<br />
Use the command:<br />
<pre><br />
qemu-img create -f qcow2 win.qcow 4G<br />
</pre><br />
to create the image file named "win.qcow". The "4G" parameter specifies the size of the disk - in this case 4 GB. You can use suffix M for megabytes (for example "256M"). You shouldn't worry too much about the size of the disk - the qcow2 format compresses the image so that the empty space doesn't add up to the size of the file.<br />
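The on-demand growth described above can be seen with a plain sparse file, which is the raw-image analogue of what the qcow2 format does internally (illustrative only; a real qcow2 file additionally carries format metadata):<br />

```shell
# A 1 GB sparse file: large apparent size, near-zero actual disk usage.
dir=$(mktemp -d)
truncate -s 1G "$dir/sparse.img"
stat -c %s "$dir/sparse.img"   # apparent size: 1073741824 bytes
du -k "$dir/sparse.img"        # blocks actually allocated: (nearly) 0
```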
<br />
=== Preparing the installation media===<br />
The installation CD-ROM/floppy shouldn't be mounted, because Qemu accesses the media directly. It is a good idea to dump CD-ROM and/or floppy to a file, because it both improves performance and doesn't require you to have direct access to the devices (that is, you can run Qemu as a regular user). For example, if the CD-ROM device node is named "/dev/cdrom", you can dump it to a file with the command:<br />
<pre><br />
dd if=/dev/cdrom of=win98icd.iso<br />
</pre><br />
<br />
Do the same for floppies:<br />
<br />
<pre><br />
dd if=/dev/fd0 of=win95d1.img<br />
...<br />
</pre><br />
<br />
When you need to replace floppies within qemu, just copy the contents of one floppy over another. For this reason, it is useful to create a special file that will hold the current floppy:<br />
<br />
<pre><br />
touch floppy.img<br />
</pre><br />
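The swap then amounts to overwriting the file's contents while qemu keeps it open. A dry run with dummy image files (file names are made up):<br />

```shell
work=$(mktemp -d)
printf 'disk one' > "$work/win95d1.img"
printf 'disk two' > "$work/win95d2.img"

# "Insert" the first floppy, then swap to the second one mid-install.
cp "$work/win95d1.img" "$work/floppy.img"
cp "$work/win95d2.img" "$work/floppy.img"

cat "$work/floppy.img"   # now holds the second disk's contents
```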
<br />
=== Installing the operating system===<br />
<br />
This is the first time you will need to start the emulator.<br />
One thing to keep in mind: when you click inside qemu window, the mouse pointer is grabbed. To release it press Ctrl+Alt.<br />
<br />
If you need to use a bootable floppy, run Qemu with:<br />
<pre><br />
qemu -cdrom [cdrom_image] -fda [floppy_image] -boot a [hd_image]<br />
</pre><br />
or, if you are on an x86_64 system (this will avoid many problems afterwards):<br />
<pre><br />
qemu-system-x86_64 -cdrom [cdrom_image] -fda [floppy_image] -boot a [hd_image]<br />
</pre><br />
<br />
If your CD-ROM is bootable or you are using iso files, run Qemu with:<br />
<pre><br />
qemu -cdrom [cdrom_image] -boot d [hd_image]<br />
</pre><br />
or, if you are on an x86_64 system (this will avoid many problems afterwards):<br />
<pre><br />
qemu-system-x86_64 -cdrom [cdrom_image] -boot d [hd_image]<br />
</pre><br />
Now partition the virtual hard disk, format the partitions and install the OS.<br />
<br />
A few hints:<br />
# If you are using a Windows 95 boot floppy, then choosing SAMSUNG as the type of CD-ROM seems to work.<br />
# There are problems when installing Windows 2000. Windows setup will generate a lot of edb*.log files, one after the other, containing nothing but blank spaces in C:\WINNT\SECURITY, which quickly fill the virtual hard disk. A workaround is to open a Windows command prompt as early as possible during setup (by pressing Shift-F10), which will allow you to remove these log files as they appear by typing:<br />
<pre><br />
del %windir%\security\*.log<br />
</pre><br />
<br />
NOTE: According to the official QEMU website, "Windows 2000 has a bug which gives a disk full problem during its installation. When installing it, use the `-win2k-hack' QEMU option to enable a specific workaround. After Windows 2000 is installed, you no longer need this option (this option slows down the IDE transfers)."<br />
<br />
=== Running the system===<br />
<br />
To run the system simply type:<br />
<pre><br />
qemu [hd_image]<br />
</pre><br />
<br />
A good idea is to use overlay images. This way you can create a hard disk image once and tell Qemu to store changes in an external file.<br />
You get rid of all the instability, because it is so easy to revert to a previous system state :)<br />
<br />
To create an overlay image, type:<br />
<pre><br />
qemu-img create -b [base_image] -f qcow2 [overlay_image]<br />
</pre><br />
Substitute the hard disk image for base_image (in our case win.qcow). After that you can run qemu with:<br />
<pre><br />
qemu [overlay_image]<br />
</pre><br />
or, if you are on an x86_64 system:<br />
<pre><br />
qemu-system-x86_64 [overlay_image]<br />
</pre><br />
and the original image will be left untouched. One hitch: the base image cannot be renamed or moved, because the overlay remembers the base's full path.<br />
<br />
=== Moving data between host and guest OS===<br />
<br />
If you have servers running on your host OS, they will be accessible from the guest at the IP address 10.0.2.2 without any further configuration. So you can just FTP or SSH to 10.0.2.2 from Windows to share data, or, if you would like to use Samba:<br />
<br />
==== Samba====<br />
<br />
Qemu supports SAMBA, which allows you to mount host directories during the emulation. There appears to be an incompatibility between SAMBA 3.x and some versions of qemu, but at least with a current snapshot of qemu it should work.<br />
<br />
First, you need to have a working samba installation. Then add the following section to your smb.conf:<br />
<pre><br />
[qemu]<br />
comment = Temporary file space<br />
path = /tmp<br />
read only = no<br />
public = yes<br />
</pre><br />
<br />
Now start qemu with:<br />
<pre><br />
qemu [hd_image] -smb qemu<br />
</pre><br />
<br />
Then you should be able to access your host's smb-server with the ip-address 10.0.2.2. If you're running Win9x as guest OS, you may need to add<br />
<pre><br />
10.0.2.2 smbserver<br />
</pre><br />
to c:\windows\lmhosts (Win9x has Lmhosts.sam as a SAMple, rename it!).<br />
<br />
==== Mounting the hard disk image====<br />
<br />
Fortunately, there is a way to mount the hard disk image with a loopback device. Log in as root, make a temporary directory and mount the image with the command:<br />
<pre><br />
mount -o loop,offset=32256 [hd_image] [tmp_dir]<br />
</pre><br />
Now you can copy data in both directions. When you are done, unmount with:<br />
<pre><br />
umount [hd_image]<br />
</pre><br />
The drawback of this solution is that you cannot use it with qcow images (including overlay images), so you need to create your images without the "-f qcow2" option. Tip: create a second, raw hard drive image. This way you'll be able to transfer data easily and use qcow overlay images for the primary drive.<br />
<br />
REMEMBER: never run qemu when the image is mounted!<br />
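The offset 32256 passed to mount above is not arbitrary: it is where the first partition of a classic DOS-style disk layout begins, 63 sectors of 512 bytes each. A quick sanity check of the arithmetic:<br />

```shell
# The first partition of a traditional DOS disk layout starts at sector 63.
sector_size=512
start_sector=63
offset=$(( start_sector * sector_size ))
echo "$offset"   # 32256 -- the value passed to mount's offset= option
```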
<br />
==== Using any real partition as the single primary partition of a hard disk image ====<br />
<br />
Sometimes, you may wish to use one of your system partitions from within qemu (for instance, if you wish to be able to boot either your real machine or qemu using a given partition as root). You can do this using software RAID in linear mode (you need the linear.ko kernel driver) and a loopback device: the trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a qemu raw disk image.<br />
<br />
Suppose you have a plain, unmounted /dev/hdaN partition with some filesystem on it that you wish to make part of a qemu disk image. First, create a small file to hold the MBR:<br />
<br />
dd if=/dev/zero of=/path/to/mbr count=32<br />
<br />
Here, a 16 KB (32 * 512 bytes) file is created. It is important not to make it too small (even though the MBR only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, set up a loopback device for the MBR file:<br />
<br />
losetup -f /path/to/mbr<br />
<br />
Let's assume the resulting device is /dev/loop0 (assuming no other loopback devices were already in use). The next step is to create the "merged" MBR + /dev/hdaN disk image using software RAID:<br />
<br />
modprobe linear<br />
mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hdaN<br />
<br />
The resulting /dev/md0 is what you will use as a qemu raw disk image (don't forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partition table) so that the primary partition's start point in the MBR matches that of /dev/hdaN inside /dev/md0 (an offset of exactly 32 * 512 = 16384 bytes in this example). Do this using fdisk on the host machine, not in the emulator: the default raw disc detection routine from qemu often results in non-kilobyte-roundable offsets (such as 31.5 KB, as in the previous section) that cannot be managed by the software RAID code. Hence, from the host:<br />
<br />
fdisk /dev/md0<br />
<br />
Press 'x' to enter the expert menu. Set the number of 's'ectors per track so that the size of one<br />
cylinder matches the size of your MBR file. For two heads and a sector size of 512 bytes, the number of<br />
sectors per track should be 16, so we get cylinders of size 2x16x512 = 16 KB. Now, press 'r' to return<br />
to the main menu. Press 'p' and check that the cylinder size is now 16 KB.<br />
Now, create a single primary partition corresponding to /dev/hdaN. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk). Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a qemu disk image:<br />
<br />
qemu -hdc /dev/md0 [...]<br />
<br />
You can of course safely install any bootloader on this disk image using qemu, provided the original /dev/hdaN partition contains the necessary tools.<br />
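The geometry chosen in fdisk above is exactly what makes the partition start on a cylinder boundary: with 2 heads, 16 sectors per track and 512-byte sectors, one cylinder is the same 16384 bytes as the 32-block MBR file created earlier. A quick check of the arithmetic:<br />

```shell
heads=2; sectors_per_track=16; sector_size=512
cylinder_bytes=$(( heads * sectors_per_track * sector_size ))

mbr_blocks=32   # from: dd if=/dev/zero of=/path/to/mbr count=32
mbr_bytes=$(( mbr_blocks * sector_size ))

# Both are 16384, so the partition can start exactly at cylinder 2.
echo "$cylinder_bytes $mbr_bytes"
```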
<br />
=== Optimizing Windows 9x CPU usage===<br />
<br />
Windows 9x doesn't use the hlt instruction, so the emulator always eats up 100% of the CPU even when no computation is being done. Grab the file http://www.user.cityline.ru/~maxamn/amnhltm.zip, unpack it, copy it to the image and run the .bat file.<br />
<br />
=== Using the QEmu Accelerator Module===<br />
<br />
The developers of qemu have created an optional kernel module that accelerates qemu, sometimes to near-native levels. It should be loaded with the option<br />
major=0<br />
to automate the creation of the required /dev/kqemu device. The following command<br />
echo "options kqemu major=0" >> /etc/modprobe.conf<br />
will amend modprobe.conf to ensure that the module option is added every time the module is loaded.<br />
<br />
Append kqemu to the list of modules in /etc/rc.conf to have it loaded the next time your system starts. To load it now, without rebooting, do the following as root:<br />
modprobe kqemu<br />
<br />
If you are using Linux, Windows 2000 or Windows XP as guest OS, start qemu with the command line option<br />
qemu [...] -kernel-kqemu<br />
or, if you are on an x86_64 system (this will not work otherwise):<br />
qemu-system-x86_64 [...] -kernel-kqemu<br />
This enables full virtualization and thus improves speed considerably.<br />
<br />
=== Using the Kernel-based Virtual Machine ===<br />
<br />
[[Kvm]] is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko. Using [[Kvm]], one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.<br />
<br />
This technology requires an x86 machine running a recent Linux kernel on an Intel processor with VT (virtualization technology) extensions, or an AMD processor with SVM extensions. It is included in the mainline linux kernel since 2.6.20.<br />
<br />
The qemu package now provides a qemu-kvm executable that takes advantage of this technology, and the Arch kernel now provides the required modules.<br />
<br />
To take advantage of [[Kvm]], you simply need a compatible processor (the following command must print something to the screen):<br />
<br />
egrep '^flags.*(vmx|svm)' /proc/cpuinfo<br />
<br />
And load the appropriate module from your rc.conf<br />
<br />
* For Intel® processors add this module to your MODULES array in /etc/rc.conf<br />
<br />
kvm-intel<br />
<br />
* for AMD® processors add this module to your MODULES array in /etc/rc.conf<br />
<br />
kvm-amd<br />
<br />
Also, you will need to add yourself to the 'kvm' group.<br />
<br />
===Basic Networking===<br />
To add basic networking to your virtual machine, you may simply load qemu with these options: -net nic,vlan=1 -net user,vlan=1. For example, to boot from a CD-ROM image, you could use:<br />
qemu -kernel-kqemu -no-acpi -net nic,vlan=1 -net user,vlan=1 -cdrom dsl-4.3rc1.iso<br />
<br />
=== Tap Networking with QEMU ===<br />
<br />
==== Basic Idea ====<br />
<br />
Tap networking in QEMU lets virtual machines register themselves as though each had a separate ethernet adapter, and have their traffic bridged directly onto your local area network. This is sometimes very desirable, if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.<br />
<br />
==== Security Warning ====<br />
<br />
You probably <b>should not</b> use this networking method if your host Arch machine is directly on the Internet. It can expose your virtual machines directly to attack!<br />
<br />
In general, Arch disclaims any responsibility for security implications (or implications of any kind, really) from following these instructions.<br />
<br />
==== Nitty Gritty ====<br />
<br />
To set all this up, you'll need to install the following packages:<br />
bridge-utils (for brctl, to manipulate bridges)<br />
uml_utilities (for tunctl, to manipulate taps)<br />
sudo (for manipulating bridges and tunnels as root)<br />
<br />
Then you need to take the following steps:<br />
<br />
1. Replace your normal ethernet adapter with a bridge adapter and bind your normal ethernet adapter to it. First install the bridging module:<br />
<br />
# modprobe bridge<br />
<br />
2. Configure your bridge <code>br0</code> to have your real ethernet adapter (herein assumed <code>eth0</code>) in it, in <code>/etc/conf.d/bridges</code>:<br />
bridge_br0="eth0"<br />
BRIDGE_INTERFACES=(br0)<br />
<br />
3. Change your networking configuration so that you just bring up your real ethernet adapter without configuring it, allowing real configuration to happen on the bridge interface. In <code>/etc/rc.conf</code>:<br />
<br />
eth0="eth0 up"<br />
br0="dhcp"<br />
INTERFACES=(eth0 br0)<br />
<br />
Remember, especially if you're doing DHCP, it's essential that the bridge comes up AFTER the real adapter, otherwise the bridge won't be able to talk to anything to get a DHCP address!<br />
<br />
4. QEMU is going to create "tap" adapters (virtualized network adapters), add them to the bridge as virtual machines are brought up, and remove them when done. First you need to make sure your kvm users have access to the tunnel device. Add to your <code>/etc/udev/rules.d/65-kvm.rules</code> file:<br />
<br />
KERNEL=="tun", NAME="net/%k", GROUP="kvm", MODE="0660"<br />
<br />
Then do the following so udev will pick up the new rule:<br />
<br />
# killall -HUP udevd<br />
<br />
5. Install the tunnel/tap module:<br />
<br />
# modprobe tun<br />
<br />
6. Install the script that QEMU uses to bring up the tap adapter in <code>/etc/qemu-ifup</code> with root:kvm 750 permissions:<br />
<br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifup"<br />
echo "Bringing up $1 for bridged mode..."<br />
sudo /sbin/ifconfig $1 0.0.0.0 promisc up<br />
echo "Adding $1 to br0..."<br />
sudo /usr/sbin/brctl addif br0 $1<br />
sleep 2<br />
<br />
7. Use <code>visudo</code> to add the following to your <code>sudoers</code> file:<br />
<br />
Cmnd_Alias QEMU=/sbin/ifconfig,/sbin/modprobe,/usr/sbin/brctl,/usr/bin/tunctl<br />
%kvm ALL=NOPASSWD: QEMU<br />
<br />
8. Make sure the user(s) wishing to use this new functionality are in the kvm group. Log out and log in again if necessary.<br />
<br />
9. Launch qemu using the following <code>run-qemu</code> script:<br />
<br />
USERID=`whoami`<br />
IFACE=`sudo tunctl -b -u $USERID`<br />
<br />
qemu-kvm -net nic -net tap,ifname="$IFACE" "$@"<br />
<br />
sudo tunctl -d $IFACE &> /dev/null<br />
<br />
10. Add <code>bridge</code> and <code>tun</code> to your <code>MODULES</code> line in <code>/etc/rc.conf</code>.<br />
<br />
=== Front-ends for Qemu ===<br />
<br />
There are a few GUI Front-ends for Qemu:<br />
<br />
* community/qemu-launcher<br />
* community/qemulator<br />
* community/qtemu<br />
<br />
===Keyboard seems broken / Arrow keys don't work===<br />
<br />
Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The available keymaps can be found in /usr/share/qemu/keymaps.<br />
<br />
<pre><br />
qemu -k [keymap] [disk_image]<br />
</pre><br />
<br />
=== External links ===<br />
* [http://kidsquid.com/cgi-bin/moin.cgi/QEMUMenu QEMUMenu for Windows].<br />
* [http://mychael.gotdns.com/blog/2006/12/14/qemu-setup/ Network bridging setup for QEMU]</div>Veoxhttps://wiki.archlinux.org/index.php?title=Talk:Lisp_package_guidelines&diff=65751Talk:Lisp package guidelines2009-03-26T10:05:36Z<p>Veox: </p>
<hr />
<div>== Package Naming ==<br />
<br />
I've added Alexandria to AUR as 'cl-alexandria', because 'alexandria' is already taken by a book collection managing application. It is clear that the directory in /usr/share/common-lisp/ should be named 'alexandria' and not 'cl-alexandria', otherwise code written with it would not be compatible with other distros.<br />
<br />
Should we be naming all CL packages with a 'cl-' prefix, or only those that "already" have it (like cl-ppcre and cl-who)?<br />
<br />
-- [[User:Veox|Veox]] 06:05, 26 March 2009 (EDT)</div>Veoxhttps://wiki.archlinux.org/index.php?title=Talk:Lisp_package_guidelines&diff=65750Talk:Lisp package guidelines2009-03-26T10:05:21Z<p>Veox: Created page with '== Package Naming == I've added Alexandria to AUR as 'cl-alexandria', because 'alexandria' is already taken by a book collection managing application. It is clear that the direc...'</p>
<hr />
<div>== Package Naming ==<br />
<br />
I've added Alexandria to AUR as 'cl-alexandria', because 'alexandria' is already taken by a book collection managing application. It is clear that the directory in /usr/share/common-lisp/ should be named 'alexandria' and not 'cl-alexandria', otherwise code written with it would not be compatible with other distros.<br />
<br />
Should we be naming all CL packages with a 'cl-' prefix, or only those that "already" have it (like cl-ppcre and cl-who)?<br />
<br />
[[User:Veox|Veox]] 06:05, 26 March 2009 (EDT)</div>Veox