https://wiki.archlinux.org/api.php?action=feedcontributions&user=Peoro&feedformat=atomArchWiki - User contributions [en]2024-03-29T02:23:53ZUser contributionsMediaWiki 1.41.0https://wiki.archlinux.org/index.php?title=PostgreSQL&diff=599390PostgreSQL2020-02-27T20:25:26Z<p>Peoro: Second time having problem with upgrading when PGSQL was no longer working (because it was too old and its deps were updated to an incompatible version).</p>
<hr />
<div>[[Category:Relational DBMSs]]<br />
[[it:PostgreSQL]]<br />
[[ja:PostgreSQL]]<br />
[[ru:PostgreSQL]]<br />
[[zh-hans:PostgreSQL]]<br />
{{Related articles start}}<br />
{{Related|PhpPgAdmin}}<br />
{{Related articles end}}<br />
[https://www.postgresql.org/ PostgreSQL] is an open source, community driven, standard compliant object-relational database system.<br />
<br />
== Installation ==<br />
<br />
{{Style|Don't duplicate [[sudo]] and [[su]].}}<br />
<br />
[[Install]] the {{Pkg|postgresql}} package. It will also create a system user called ''postgres''.<br />
<br />
{{Warning|See [[#Upgrading PostgreSQL]] for necessary steps before installing new versions of the PostgreSQL packages.}}<br />
<br />
{{Note|Commands that should be run as the ''postgres'' user are prefixed by {{ic|[postgres]$}} in this article.}}<br />
<br />
You can switch to the PostgreSQL user by executing the following command:<br />
<br />
* If you have [[sudo]] and are in [[sudoers]]:<br />
<br />
:{{bc|$ sudo -iu postgres}}<br />
<br />
* Otherwise using [[su]]:<br />
<br />
:{{bc|<nowiki><br />
$ su<br />
# su -l postgres<br />
</nowiki>}}<br />
<br />
See {{man|8|sudo}} or {{man|1|su}} for their usage.<br />
<br />
== Initial configuration ==<br />
<br />
Before PostgreSQL can function correctly, the database cluster must be initialized:<br />
<br />
[postgres]$ initdb -D /var/lib/postgres/data<br />
<br />
The {{ic|-D}} option specifies the directory where the database cluster will be stored (see [[#Change default data directory]] if you want to use a different one).<br />
<br />
Note that by default, the locale and the encoding for the database cluster are derived from your current environment (using [[Locale#LANG: default locale|$LANG]] value). [https://www.postgresql.org/docs/current/static/locale.html]<br />
However, depending on your settings and use cases this might not be what you want, and you can override the defaults using:<br />
<br />
* {{ic|1=--locale=''locale''}}, where ''locale'' is to be chosen amongst the system's [[Locale#Generating locales|available locales]];<br />
* {{ic|-E ''encoding''}} for the encoding (which must match the chosen locale);<br />
<br />
Example:<br />
<br />
[postgres]$ initdb --locale=en_US.UTF-8 -E UTF8 -D /var/lib/postgres/data<br />
<br />
Many lines should now appear on the screen with several ending by {{ic|... ok}}:<br />
<br />
{{bc|<br />
The files belonging to this database system will be owned by user "postgres".<br />
This user must also own the server process.<br />
<br />
The database cluster will be initialized with locale "en_US.UTF-8".<br />
The default database encoding has accordingly been set to "UTF8".<br />
The default text search configuration will be set to "english".<br />
<br />
Data page checksums are disabled.<br />
<br />
fixing permissions on existing directory /var/lib/postgres/data ... ok<br />
creating subdirectories ... ok<br />
selecting default max_connections ... 100<br />
selecting default shared_buffers ... 128MB<br />
selecting dynamic shared memory implementation ... posix<br />
creating configuration files ... ok<br />
running bootstrap script ... ok<br />
performing post-bootstrap initialization ... ok<br />
syncing data to disk ... ok<br />
<br />
WARNING: enabling "trust" authentication for local connections<br />
You can change this by editing pg_hba.conf or using the option -A, or<br />
--auth-local and --auth-host, the next time you run initdb.<br />
<br />
Success. You can now start the database server using:<br />
<br />
pg_ctl -D /var/lib/postgres/data -l logfile start<br />
}}<br />
<br />
If you see lines like these, the process succeeded. Return to the regular user using {{ic|exit}}.<br />
<br />
{{Note|To read more about this {{ic|WARNING}}, see [[#Restricts access rights to the database superuser by default]].}}<br />
<br />
{{Tip|If you change the data directory root to something other than {{ic|/var/lib/postgres}}, you will have to [[edit]] the service file. If the root is under {{ic|/home}}, make sure to set {{ic|ProtectHome}} to false.}}<br />
<br />
{{Warning|<br />
* If the database resides on a [[Btrfs]] file system, you should consider disabling [[Btrfs#Copy-on-Write (CoW)|Copy-on-Write]] for the directory before creating any database.<br />
* If the database resides on a [[ZFS]] file system, you should consult [[ZFS#Databases]] before creating any database.<br />
}}<br />
<br />
Finally, [[start]] and [[enable]] the {{ic|postgresql.service}}.<br />
<br />
== Create your first database/user ==<br />
<br />
{{Tip|If you create a PostgreSQL user with the same name as your Linux username, you can access the PostgreSQL database shell without having to specify a user to log in, which is quite convenient.}}<br />
<br />
Become the postgres user. Add a new database user using the [https://www.postgresql.org/docs/current/static/app-createuser.html createuser] command:<br />
<br />
[postgres]$ createuser --interactive<br />
<br />
Create a new database over which the above user has read/write privileges using the [https://www.postgresql.org/docs/current/static/app-createdb.html createdb] command (execute this command from your login shell if the database user has the same name as your Linux user, otherwise add {{ic|-O ''database-username''}} to the following command):<br />
<br />
$ createdb myDatabaseName<br />
<br />
{{Tip|If you did not grant your new user database creation privileges, add {{ic|-U postgres}} to the previous command.}}<br />
<br />
== Familiarize with PostgreSQL ==<br />
<br />
=== Access the database shell ===<br />
<br />
Become the postgres user. Start the primary database shell, [https://www.postgresql.org/docs/current/static/app-psql.html psql], where you can create and delete databases and tables, set permissions, and run raw SQL commands. Use the {{ic|-d}} option to connect to the database you created (without specifying a database, {{ic|psql}} will try to access a database that matches your username).<br />
<br />
[postgres]$ psql -d myDatabaseName<br />
<br />
Some helpful commands:<br />
<br />
Get help:<br />
<br />
=> \help<br />
<br />
Connect to a particular database:<br />
<br />
=> \c <database><br />
<br />
List all users and their permission levels:<br />
<br />
=> \du<br />
<br />
Show summary information about all tables in the current database:<br />
<br />
=> \dt<br />
<br />
Exit/quit the {{ic|psql}} shell:<br />
<br />
=> \q or CTRL+d<br />
<br />
There are of course many more meta-commands, but these should help you get started. To see all meta-commands, run:<br />
<br />
=> \?<br />
<br />
== Optional configuration ==<br />
<br />
The PostgreSQL database server configuration file is {{ic|postgresql.conf}}. This file is located in the data directory of the server, typically {{ic|/var/lib/postgres/data}}. This directory also houses the other main configuration files, including {{ic|pg_hba.conf}}, which defines authentication settings for both [[#Restricts access rights to the database superuser by default|local users]] and [[#Configure PostgreSQL to be accessible from remote hosts|remote hosts]].<br />
<br />
{{Note|By default, this directory is not browsable or searchable by a regular user. This is why {{ic|find}} and {{ic|locate}} cannot find the configuration files.}}<br />
<br />
=== Restricts access rights to the database superuser by default ===<br />
<br />
The default {{ic|pg_hba.conf}} '''allows any local user to connect as any database user''', including the database superuser.<br />
This is likely not what you want, so in order to restrict global access to the ''postgres'' user, change the following line:<br />
<br />
{{hc|/var/lib/postgres/data/pg_hba.conf|2=<br />
# TYPE DATABASE USER ADDRESS METHOD<br />
<br />
# "local" is for Unix domain socket connections only<br />
local all all trust<br />
}}<br />
<br />
To:<br />
<br />
{{hc|/var/lib/postgres/data/pg_hba.conf|2=<br />
# TYPE DATABASE USER ADDRESS METHOD<br />
<br />
# "local" is for Unix domain socket connections only<br />
local all postgres peer<br />
}}<br />
<br />
You may later add additional lines here, depending on your own needs or those of your software.<br />
<br />
=== Configure PostgreSQL to be accessible exclusively through UNIX Sockets ===<br />
<br />
In the connections and authentications section of your configuration, set:<br />
<br />
{{hc|/var/lib/postgres/data/postgresql.conf|2=<br />
listen_addresses = <nowiki>''</nowiki><br />
}}<br />
<br />
This will disable network listening completely.<br />
After this you should [[restart]] {{ic|postgresql.service}} for the changes to take effect.<br />
<br />
=== Configure PostgreSQL to be accessible from remote hosts ===<br />
<br />
In the connections and authentications section, set the {{ic|listen_addresses}} line to your needs:<br />
<br />
{{hc|/var/lib/postgres/data/postgresql.conf|2=<br />
listen_addresses = 'localhost,''my_local_ip_address'''<br />
}}<br />
<br />
You can use {{ic|'*'}} to listen on all available addresses.<br />
<br />
{{Note|PostgreSQL uses TCP port {{ic|5432}} by default for remote connections. Make sure this port is open in your [[firewall]] and able to receive incoming connections. You can also change it in the configuration file, right below {{ic|listen_addresses}}.}}<br />
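As a sketch, combining both settings in {{ic|postgresql.conf}} might look like this (the alternative port number is purely illustrative):

```text
listen_addresses = '*'     # listen on all available addresses
port = 5433                # only if you need something other than the default 5432
```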
<br />
Then add a line like the following to the authentication config:<br />
<br />
{{hc|/var/lib/postgres/data/pg_hba.conf|2=<br />
# TYPE DATABASE USER ADDRESS METHOD<br />
# IPv4 local connections:<br />
host all all ''ip_address''/32 md5<br />
}}<br />
<br />
where {{ic|''ip_address''}} is the IP address of the remote client.<br />
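A {{ic|/32}} mask admits exactly one client. To admit a whole subnet instead, a wider CIDR mask can be used; the range below is a hypothetical private network, not a value from this article:

```text
# TYPE  DATABASE  USER  ADDRESS           METHOD
host    all       all   192.168.1.0/24    md5
```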
<br />
See the documentation for [https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html pg_hba.conf].<br />
<br />
{{Note|Neither sending your plain password nor the md5 hash (used in the example above) over the Internet is secure if it is not done over an SSL-secured connection. See [https://www.postgresql.org/docs/current/static/ssl-tcp.html Secure TCP/IP Connections with SSL] for how to configure PostgreSQL with SSL.}}<br />
<br />
After this you should [[restart]] {{ic|postgresql.service}} for the changes to take effect.<br />
<br />
For troubleshooting take a look in the server log file:<br />
<br />
$ journalctl -u postgresql.service<br />
<br />
=== Configure PostgreSQL to authenticate against PAM ===<br />
<br />
PostgreSQL offers a number of authentication methods. If you would like to allow users to authenticate with their system password, additional steps are necessary. First you need to enable [[PAM]] for the connection.<br />
<br />
For example, the same configuration as above, but with PAM enabled:<br />
<br />
{{hc|/var/lib/postgres/data/pg_hba.conf|2=<br />
# IPv4 local connections:<br />
host all all ''my_remote_client_ip_address''/32 pam<br />
}}<br />
<br />
However, the PostgreSQL server runs without root privileges and will not be able to access {{ic|/etc/shadow}}. We can work around this by allowing the postgres group to read that file:<br />
<br />
# setfacl -m g:postgres:r /etc/shadow<br />
<br />
=== Change default data directory ===<br />
<br />
The default directory where all your newly created databases will be stored is {{ic|/var/lib/postgres/data}}. To change this, follow these steps:<br />
<br />
Create the new directory and make the postgres user its owner:<br />
<br />
# mkdir -p /pathto/pgroot/data<br />
# chown -R postgres:postgres /pathto/pgroot<br />
<br />
Become the postgres user, and initialize the new cluster:<br />
<br />
[postgres]$ initdb -D /pathto/pgroot/data<br />
<br />
[[Edit]] {{ic|postgresql.service}} to create a drop-in file and override the {{ic|Environment}} and {{ic|PIDFile}} settings. For example:<br />
<br />
[Service]<br />
Environment=PGROOT=''/pathto/pgroot''<br />
PIDFile=''/pathto/pgroot/''data/postmaster.pid<br />
<br />
If you want to use a directory under {{ic|/home}} as the data directory or for tablespaces, add one more line to this file:<br />
<br />
ProtectHome=false<br />
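Put together, the drop-in file might look like this (the drop-in filename is arbitrary, and {{ic|/pathto/pgroot}} stands for your chosen directory):

```ini
# /etc/systemd/system/postgresql.service.d/datadir.conf
[Service]
Environment=PGROOT=/pathto/pgroot
PIDFile=/pathto/pgroot/data/postmaster.pid
ProtectHome=false
```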
<br />
=== Change default encoding of new databases to UTF-8 ===<br />
<br />
{{Note|If you ran {{ic|initdb}} with {{ic|-E UTF8}} or while using a UTF-8 locale, these steps are not required.}}<br />
<br />
When creating a new database (e.g. with {{ic|createdb blog}}), PostgreSQL actually copies a template database. There are two predefined templates: {{ic|template0}} is vanilla, while {{ic|template1}} is meant as an on-site template changeable by the administrator and is used by default. To change the encoding of new databases, one option is to modify {{ic|template1}} in place. To do this, log into the PostgreSQL shell ({{ic|psql}}) and execute the following:<br />
<br />
First, we need to drop {{ic|template1}}. Templates cannot be dropped, so we first turn it into an ordinary database:<br />
<br />
UPDATE pg_database SET datistemplate = FALSE WHERE datname = 'template1';<br />
<br />
Now we can drop it:<br />
<br />
DROP DATABASE template1;<br />
<br />
The next step is to create a new database from {{ic|template0}}, with a new default encoding:<br />
<br />
CREATE DATABASE template1 WITH TEMPLATE = template0 ENCODING = 'UNICODE';<br />
<br />
Now modify {{ic|template1}} so it is actually a template:<br />
<br />
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template1';<br />
<br />
Optionally, if you do not want anyone connecting to this template, set {{ic|datallowconn}} to {{ic|FALSE}}:<br />
<br />
UPDATE pg_database SET datallowconn = FALSE WHERE datname = 'template1';<br />
<br />
{{Note|This last step can create problems when upgrading via {{ic|pg_upgrade}}.}}<br />
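The statements above can be collected into a single {{ic|psql}} session, run as the ''postgres'' user:

```sql
-- Turn template1 into an ordinary database so it can be dropped:
UPDATE pg_database SET datistemplate = FALSE WHERE datname = 'template1';
DROP DATABASE template1;
-- Recreate it from the pristine template0 with the new encoding:
CREATE DATABASE template1 WITH TEMPLATE = template0 ENCODING = 'UNICODE';
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template1';
```

The optional {{ic|datallowconn}} statement is deliberately left out here because of the {{ic|pg_upgrade}} caveat noted above.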
<br />
Now you can create a new database:<br />
<br />
[postgres]$ createdb blog<br />
<br />
If you log back in to {{ic|psql}} and check the databases, you should see the proper encoding of your new database:<br />
<br />
{{hc|\l|<nowiki><br />
List of databases<br />
Name | Owner | Encoding | Collation | Ctype | Access privileges<br />
-----------+----------+-----------+-----------+-------+----------------------<br />
blog | postgres | UTF8 | C | C |<br />
postgres | postgres | SQL_ASCII | C | C |<br />
template0 | postgres | SQL_ASCII | C | C | =c/postgres<br />
: postgres=CTc/postgres<br />
template1 | postgres | UTF8 | C | C |<br />
</nowiki>}}<br />
<br />
== Graphical tools ==<br />
<br />
* {{App|[[phpPgAdmin]]|Web-based administration tool for PostgreSQL.|http://phppgadmin.sourceforge.net|{{Pkg|phppgadmin}}}}<br />
* {{App|pgAdmin|Comprehensive design and management GUI for PostgreSQL.|https://www.pgadmin.org/|{{AUR|pgadmin3}} or {{Pkg|pgadmin4}}}}<br />
* {{App|pgModeler|Graphical schema designer for PostgreSQL.|https://pgmodeler.io/|{{AUR|pgmodeler}}}}<br />
<br />
For tools supporting multiple DBMSs, see [[List of applications/Documents#Database tools]].<br />
<br />
== Upgrading PostgreSQL ==<br />
<br />
{{Style|Don't show basic systemctl commands, etc.}}<br />
{{Expansion|How to upgrade when using third party extensions?|section=pg_upgrade problem if extensions (like postgis) are used}}<br />
<br />
Upgrading major PostgreSQL versions requires some extra maintenance.<br />
<br />
{{Note|<br />
* Official PostgreSQL [https://www.postgresql.org/docs/current/static/upgrading.html upgrade documentation] should be followed.<br />
* From version {{ic|10.0}} onwards, PostgreSQL [https://www.postgresql.org/about/news/1786/ changed its versioning scheme]. Previously, an upgrade from version {{ic|9.''x''}} to {{ic|9.''y''}} was considered a major upgrade. Now, an upgrade from version {{ic|10.''x''}} to {{ic|10.''y''}} is considered a minor upgrade and an upgrade from version {{ic|10.''x''}} to {{ic|11.''y''}} a major upgrade.<br />
}}<br />
<br />
{{Warning|The following instructions could cause data loss. Do not run the commands below blindly, without understanding what they do. [https://www.postgresql.org/docs/current/static/backup.html Backup database] first.}}<br />
<br />
Get the currently used database version via<br />
<br />
# cat /var/lib/postgres/data/PG_VERSION<br />
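This check can be scripted. The following is only a sketch under stated assumptions: the temporary directory stands in for {{ic|/var/lib/postgres/data}}, and the hard-coded version numbers stand in for the real cluster version and the version of the installed postgresql package.

```shell
# Sketch: compare the version recorded in the cluster's PG_VERSION file
# with the installed server's major version before starting the service.
datadir=$(mktemp -d)                    # stands in for /var/lib/postgres/data
echo 11 > "$datadir/PG_VERSION"         # version the old cluster was created with
installed=12                            # stands in for the installed package's major version
cluster=$(cat "$datadir/PG_VERSION")
if [ "$cluster" != "$installed" ]; then
    echo "major version mismatch: cluster=$cluster, installed=$installed"
fi
```

If the two versions differ, starting the service will fail, and one of the upgrade procedures below is required.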
<br />
To ensure you do not accidentally upgrade the database to an incompatible version, it is recommended to [[pacman#Skip package from being upgraded|skip updates]] to the PostgreSQL packages:<br />
<br />
{{hc|/etc/pacman.conf|2=<br />
...<br />
IgnorePkg = postgresql postgresql-libs<br />
...<br />
}}<br />
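This edit can also be scripted. The sketch below is not an official procedure; it operates on a throwaway copy rather than the real {{ic|/etc/pacman.conf}}, and the sample file contents are illustrative.

```shell
# Sketch: add the IgnorePkg line right after the [options] header,
# unless an IgnorePkg line is already present.
conf=$(mktemp)
printf '[options]\nHoldPkg = pacman glibc\n' > "$conf"   # stand-in for /etc/pacman.conf
grep -q '^IgnorePkg' "$conf" || \
    sed -i '/^\[options\]/a IgnorePkg = postgresql postgresql-libs' "$conf"
grep '^IgnorePkg' "$conf"   # prints: IgnorePkg = postgresql postgresql-libs
```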
<br />
Minor version upgrades are safe to perform. However, if you do an accidental upgrade to a different major version, you might not be able to access any of your data. Always check the [https://www.postgresql.org/ PostgreSQL home page] to be sure of what steps are required for each upgrade. For a bit about why this is the case, see the [https://www.postgresql.org/support/versioning versioning policy].<br />
<br />
There are two main ways to upgrade your PostgreSQL database. Read the official documentation for details.<br />
<br />
=== pg_upgrade ===<br />
<br />
For those wishing to use {{ic|pg_upgrade}}, a {{Pkg|postgresql-old-upgrade}} package is available that will always run one major version behind the real PostgreSQL package. This can be installed side-by-side with the new version of PostgreSQL. To upgrade from older versions of PostgreSQL there are AUR packages available: {{AUR|postgresql-96-upgrade}}, {{AUR|postgresql-95-upgrade}}, {{AUR|postgresql-94-upgrade}}, {{AUR|postgresql-93-upgrade}}, {{AUR|postgresql-92-upgrade}}. Read the {{man|1|pg_upgrade}} man page to understand what actions it performs.<br />
<br />
Note that the database cluster directory does not change from version to version, so before running {{ic|pg_upgrade}} it is necessary to rename your existing data directory and migrate into a new one. The new database cluster must be initialized, as described in the [[#Initial configuration]] section.<br />
<br />
When you are ready, stop the PostgreSQL service and upgrade the following packages: {{Pkg|postgresql}}, {{Pkg|postgresql-libs}}, and {{Pkg|postgresql-old-upgrade}}. Finally, upgrade the database cluster.<br />
<br />
Stop PostgreSQL and make sure it is no longer running:<br />
<br />
# systemctl stop postgresql.service<br />
# systemctl status postgresql.service<br />
<br />
Make sure that PostgreSQL was stopped correctly. If it was not, {{ic|pg_upgrade}} will fail too.<br />
<br />
Upgrade the packages:<br />
<br />
# pacman -S postgresql postgresql-libs postgresql-old-upgrade<br />
<br />
Rename the database cluster directory, and create an empty one:<br />
<br />
# mv /var/lib/postgres/data /var/lib/postgres/olddata<br />
# mkdir /var/lib/postgres/data /var/lib/postgres/tmp<br />
# chown postgres:postgres /var/lib/postgres/data /var/lib/postgres/tmp<br />
[postgres]$ cd /var/lib/postgres/tmp<br />
[postgres]$ initdb -D /var/lib/postgres/data<br />
<br />
Upgrade the cluster, replacing {{ic|''PG_VERSION''}} below with the old PostgreSQL version number (e.g. {{ic|11}}):<br />
<br />
[postgres]$ pg_upgrade -b /opt/pgsql-''PG_VERSION''/bin -B /usr/bin -d /var/lib/postgres/olddata -D /var/lib/postgres/data<br />
<br />
{{ic|pg_upgrade}} will perform the upgrade and create some scripts in {{ic|/var/lib/postgres/tmp/}}. Follow the instructions given on screen and act accordingly. You may delete the {{ic|/var/lib/postgres/tmp}} directory once the upgrade is complete.<br />
<br />
If necessary, adjust the configuration files of the new cluster (e.g. {{ic|pg_hba.conf}} and {{ic|postgresql.conf}}) to match the old cluster.<br />
<br />
Start the cluster:<br />
<br />
# systemctl start postgresql.service<br />
<br />
=== Manual dump and reload ===<br />
<br />
You could also do something like the following (after upgrading the packages and installing {{Pkg|postgresql-old-upgrade}}).<br />
<br />
{{Note|<br />
* Below are the commands for upgrading from PostgreSQL 11. You can find similar commands in {{ic|/opt/}} for your version of the PostgreSQL cluster, provided you have the matching version of the {{Pkg|postgresql-old-upgrade}} package installed.<br />
* If you have customized your {{ic|pg_hba.conf}} file, you may have to temporarily modify it to allow full access to the old database cluster from the local system. After the upgrade is complete, apply your customizations to the new database cluster as well and [[restart]] {{ic|postgresql.service}}.<br />
}}<br />
<br />
# systemctl stop postgresql.service<br />
# mv /var/lib/postgres/data /var/lib/postgres/olddata<br />
# mkdir /var/lib/postgres/data<br />
# chown postgres:postgres /var/lib/postgres/data<br />
[postgres]$ initdb -D /var/lib/postgres/data<br />
[postgres]$ /opt/pgsql-11/bin/pg_ctl -D /var/lib/postgres/olddata/ start<br />
[postgres]$ pg_dumpall -h /tmp -f /tmp/old_backup.sql<br />
[postgres]$ /opt/pgsql-11/bin/pg_ctl -D /var/lib/postgres/olddata/ stop<br />
# systemctl start postgresql.service<br />
[postgres]$ psql -f /tmp/old_backup.sql postgres<br />
<br />
== Troubleshooting ==<br />
<br />
=== Improve performance of small transactions ===<br />
<br />
If you are using PostgreSQL on a local machine for development and it seems slow, you could try turning [https://www.postgresql.org/docs/current/static/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT synchronous_commit off] in the configuration. Beware of the [https://www.postgresql.org/docs/current/static/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT caveats], however.<br />
<br />
{{hc|/var/lib/postgres/data/postgresql.conf|2=<br />
synchronous_commit = off<br />
}}<br />
<br />
=== Prevent disk writes when idle ===<br />
<br />
PostgreSQL periodically updates its internal "statistics" file. By default, this file is stored on disk, which prevents disks from spinning down on laptops and causes hard drive seek noise. It is simple and safe to relocate this file to a memory-only file system with the following configuration option:<br />
<br />
{{hc|/var/lib/postgres/data/postgresql.conf|2=<br />
stats_temp_directory = '/run/postgresql'<br />
}}<br />
<br />
=== pgAdmin 4 issues after upgrade to PostgreSQL 12===<br />
<br />
If you see errors about {{ic|string indices must be integers}} when navigating the tree on the left, or about {{ic|column rel.relhasoids does not exist}} when viewing the data, remove the server from the connection list in pgAdmin and add a fresh server instance. pgAdmin will otherwise continue to treat the server as a PostgreSQL 11 server resulting in these issues.</div>Peorohttps://wiki.archlinux.org/index.php?title=Talk:Matrix&diff=562683Talk:Matrix2019-01-10T22:13:15Z<p>Peoro: Synapse 0.34.1.1 now supports Python 3.7</p>
<hr />
<div>== Switching to Python 3 ==<br />
<br />
The latest version of Synapse (0.34.0, the one currently available in community) supports Python 3 (although python 3.7 is still experimental).<br />
<br />
See their changelogs[https://github.com/matrix-org/synapse/blob/master/CHANGES.md#synapse-0340-2018-12-20] and usage notes[https://github.com/matrix-org/synapse/blob/master/UPGRADE.rst#upgrading-to-v0340].<br />
<br />
I think it would be ideal to start looking into migrating to Python 3.<br />
<br />
[[User:Peoro|Peoro]] ([[User talk:Peoro|talk]]) 21:39, 6 January 2019 (UTC)<br />
<br />
=== Python 3.7 about to become stable ===<br />
<br />
According to some people on the #matrix:matrix.org chat room, Python 3.7 is supported on the development branch. They're about to release a new version soon just for that.<br />
<br />
[[User:Peoro|Peoro]] ([[User talk:Peoro|talk]]) 21:58, 6 January 2019 (UTC)<br />
<br />
:Which is exactly what we need before migrating, since Arch is Python 3.7. ;) [[User:Archange|Archange]] ([[User talk:Archange|talk]]) 15:11, 7 January 2019 (UTC)<br />
<br />
::Synapse 0.34.1.1 was released, and Python 3.7 is now officially supported. [[User:Peoro|Peoro]] ([[User talk:Peoro|talk]]) 22:13, 10 January 2019 (UTC)</div>Peorohttps://wiki.archlinux.org/index.php?title=Talk:Matrix&diff=562121Talk:Matrix2019-01-06T21:58:33Z<p>Peoro: Updates on python 3.7 support</p>
<hr />
<div>== Switching to Python 3 ==<br />
<br />
The latest version of Synapse (0.34.0, the one currently available in community) supports Python 3 (although python 3.7 is still experimental).<br />
<br />
See their changelogs[https://github.com/matrix-org/synapse/blob/master/CHANGES.md#synapse-0340-2018-12-20] and usage notes[https://github.com/matrix-org/synapse/blob/master/UPGRADE.rst#upgrading-to-v0340].<br />
<br />
I think it would be ideal to start looking into migrating to Python 3.<br />
<br />
[[User:Peoro|Peoro]] ([[User talk:Peoro|talk]]) 21:39, 6 January 2019 (UTC)<br />
<br />
=== Python 3.7 about to become stable ===<br />
<br />
According to some people on the #matrix:matrix.org chat room, Python 3.7 is supported on the development branch. They're about to release a new version soon just for that.<br />
<br />
[[User:Peoro|Peoro]] ([[User talk:Peoro|talk]]) 21:58, 6 January 2019 (UTC)</div>Peorohttps://wiki.archlinux.org/index.php?title=Matrix&diff=562118Matrix2019-01-06T21:49:24Z<p>Peoro: Mentioning an ongoing discussion about running Synapse with python3</p>
<hr />
<div>[[Category:Instant messaging]]<br />
[[Category:Voice over IP]]<br />
[[ja:Matrix]]<br />
[https://matrix.org/ Matrix] is an ambitious new ecosystem for open federated instant messaging and VoIP. It consists of servers, clients and bridge software to connect to existing messaging solutions like IRC.<br />
<br />
== Installation ==<br />
<br />
The reference server implementation '''Synapse''' is available in the community repository as {{Pkg|matrix-synapse}}. The community package creates a ''synapse'' user. It still uses Python 2.7 (see the [[Talk:Matrix|Discussion]] about migrating to Python 3).<br />
<br />
== Configuration ==<br />
<br />
After installation, a configuration file needs to be generated. It should be readable by the ''synapse'' user:<br />
{{bc|<nowiki><br />
$ cd /var/lib/synapse<br />
$ sudo -u synapse python2 -m synapse.app.homeserver \<br />
--server-name my.domain.name \<br />
--config-path /etc/synapse/homeserver.yaml \<br />
--generate-config \<br />
--report-stats=yes<br />
</nowiki>}}<br />
<br />
Note that this will generate corresponding SSL keys and self-signed certificates for the specified server name. You have to regenerate those if you change the server name.<br />
<br />
== Service ==<br />
<br />
A systemd service named ''synapse.service'' will be installed by the matrix-synapse package. It will start the synapse server as user ''synapse'' and use the configuration file {{ic|/etc/synapse/homeserver.yaml}}.<br />
<br />
== User management ==<br />
<br />
You need at least one user on your fresh synapse server. You may create one as your normal non-root user with the command<br />
{{bc|$ register_new_matrix_user -c /etc/synapse/homeserver.yaml https://localhost:8448}}<br />
or using one of the [https://matrix.org/docs/projects/try-matrix-now.html Matrix clients].<br />
<br />
== Spider Webcrawler ==<br />
To enable the webcrawler for server-generated link previews, the additional packages {{Pkg|python2-lxml}} and {{Pkg|python2-netaddr}} have to be installed.<br />
After that the config option {{ic|url_preview_enabled: True}} can be set in your {{ic|homeserver.yaml}}.<br />
To prevent the synapse server from issuing arbitrary GET requests to internal hosts, {{ic|url_preview_ip_range_blacklist}} has to be set.<br />
{{Warning|There are no defaults! By default the synapse server can crawl all your internal hosts.}}<br />
There are some examples that can be uncommented.<br />
Add your local IP ranges to that list to prevent the synapse server from trying to crawl them.<br />
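As a sketch, a blacklist covering the loopback and common private ranges might look like this in {{ic|homeserver.yaml}}; the ranges shown are generic examples, so adjust them to match your actual networks:

```yaml
url_preview_ip_range_blacklist:
  - '127.0.0.0/8'
  - '10.0.0.0/8'
  - '172.16.0.0/12'
  - '192.168.0.0/16'
```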
After changing the {{ic|homeserver.yaml}} the service has to be restarted.</div>Peorohttps://wiki.archlinux.org/index.php?title=Talk:Matrix&diff=562117Talk:Matrix2019-01-06T21:39:12Z<p>Peoro: Mentioning that Synapse now works with python3</p>
<hr />
<div>== Switching to Python 3 ==<br />
<br />
The latest version of Synapse (0.34.0, the one currently available in community) supports Python 3 (although python 3.7 is still experimental).<br />
<br />
See their changelogs[https://github.com/matrix-org/synapse/blob/master/CHANGES.md#synapse-0340-2018-12-20] and usage notes[https://github.com/matrix-org/synapse/blob/master/UPGRADE.rst#upgrading-to-v0340].<br />
<br />
I think it would be ideal to start looking into migrating to Python 3.<br />
<br />
[[User:Peoro|Peoro]] ([[User talk:Peoro|talk]]) 21:39, 6 January 2019 (UTC)</div>Peorohttps://wiki.archlinux.org/index.php?title=PCI_passthrough_via_OVMF&diff=444622PCI passthrough via OVMF2016-08-05T22:59:43Z<p>Peoro: Was affected by an issue (audio and hw acceleration going bad after a while), found a solution and added it to the troubleshooting section.</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[ja:OVMF による PCI パススルー]]<br />
The Open Virtual Machine Firmware ([http://www.tianocore.org/ovmf/ OVMF]) is a project to enable UEFI support for virtual machines. Starting with Linux 3.9 and recent versions of QEMU, it is now possible to passthrough a graphics card, offering the VM native graphics performance which is useful for graphic-intensive tasks.<br />
<br />
Provided you have a desktop computer with a spare GPU you can dedicate to the host (be it an integrated GPU or an old OEM card, the brands do not even need to match) and that your hardware supports it (see [[#Prerequisites]]), it is possible to have a VM of any OS with its own dedicated GPU and near-native performance. For more information on techniques see the background [http://www.linux-kvm.org/images/b/b3/01x09b-VFIOandYou-small.pdf presentation (pdf)]. <br />
<br />
== Prerequisites ==<br />
A VGA passthrough relies on a number of technologies that are not ubiquitous as of today and might not be available on your hardware. You will not be able to do this on your machine unless the following requirements are met:<br />
<br />
* Your CPU must support hardware virtualization (for kvm) and IOMMU (for the passthrough itself)<br />
** [http://ark.intel.com/search/advanced?s=t&VTX=true&VTD=true List of compatible Intel CPUs (Intel VT-x and Intel VT-d)]<br />
** [http://support.amd.com/en-us/kb-articles/Pages/GPU120AMDRVICPUsHyperVWin8.aspx List of compatible AMD CPUs (AMD-V and AMD-Vi)]<br />
* Your motherboard must also support IOMMU<br />
** Both the chipset and the BIOS must support it. It is not always easy to tell at a glance whether or not this is the case, but there is a [http://wiki.xen.org/wiki/VTdHowTo fairly comprehensive list on the matter on the Xen wiki] as well as [[wikipedia:List_of_IOMMU-supporting_hardware|another one on Wikipedia]].<br />
* Your guest GPU ROM must support UEFI<br />
** If you can find [https://www.techpowerup.com/vgabios/ any ROM in this list] that applies to your specific GPU and is said to support UEFI, you are generally in the clear. If not, you might want to try anyway if you have a recent GPU.<br />
<br />
You will probably want to have a spare monitor (the GPU will not display anything if there is no screen plugged in, and using a VNC or Spice connection will not help your performance), as well as a mouse and a keyboard you can pass to your VM. If anything goes wrong, you will at least have a way to control your host machine this way.<br />
<br />
==Setting up IOMMU==<br />
IOMMU is a system-specific IO mapping mechanism and can be used with most devices. IOMMU is a generic name for Intel VT-d and AMD-Vi.<br />
===Enabling IOMMU===<br />
Ensure that AMD-VI/VT-d is enabled in your BIOS settings. Both normally show up alongside other CPU features (meaning they could be in an overclocking-related menu) either with their actual names ("Vt-d" or "AMD-VI"), legacy names ("Vanderpool" for Vt-x, "Pacifica" for AMD-V) or in more ambiguous terms such as "Virtualization technology", which may or may not be explained in the manual.<br />
<br />
You will also have to enable IOMMU support in the kernel itself through a [[Kernel_parameters|bootloader kernel option]]. Depending on your type of CPU, use either {{ic|intel_iommu<nowiki>=</nowiki>on}} for Intel CPUs (VT-d) or {{ic|amd_iommu<nowiki>=</nowiki>on}} for AMD CPUs (AMD-Vi).<br />
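The choice between the two parameters follows directly from the CPU vendor string in {{ic|/proc/cpuinfo}}. A minimal sketch; the function name {{ic|iommu_param}} is purely illustrative and not part of any tool:<br />

```shell
#!/bin/bash
# Map a /proc/cpuinfo vendor_id string to the matching IOMMU kernel
# parameter. "iommu_param" is a hypothetical helper for illustration.
iommu_param() {
    case "$1" in
        GenuineIntel) echo "intel_iommu=on" ;;  # Intel VT-d
        AuthenticAMD) echo "amd_iommu=on" ;;    # AMD-Vi
        *)            echo "unknown" ;;
    esac
}

# On a live system one could feed it the real vendor string:
#   iommu_param "$(awk -F': ' '/vendor_id/ {print $2; exit}' /proc/cpuinfo)"
iommu_param GenuineIntel   # prints intel_iommu=on
```

The resulting parameter still has to be added to the bootloader configuration as described above.<br />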
<br />
After rebooting, check dmesg to confirm that IOMMU has been correctly enabled:<br />
{{hc|dmesg<nowiki>|</nowiki>grep -e DMAR -e IOMMU|<br />
[ 0.000000] ACPI: DMAR 0x00000000BDCB1CB0 0000B8 (v01 INTEL BDW 00000001 INTL 00000001)<br />
[ 0.000000] Intel-IOMMU: enabled<br />
[ 0.028879] dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap c0000020660462 ecap f0101a<br />
[ 0.028883] dmar: IOMMU 1: reg_base_addr fed91000 ver 1:0 cap d2008c20660462 ecap f010da<br />
[ 0.028950] IOAPIC id 8 under DRHD base 0xfed91000 IOMMU 1<br />
[ 0.536212] DMAR: No ATSR found<br />
[ 0.536229] IOMMU 0 0xfed90000: using Queued invalidation<br />
[ 0.536230] IOMMU 1 0xfed91000: using Queued invalidation<br />
[ 0.536231] IOMMU: Setting RMRR:<br />
[ 0.536241] IOMMU: Setting identity map for device 0000:00:02.0 [0xbf000000 - 0xcf1fffff]<br />
[ 0.537490] IOMMU: Setting identity map for device 0000:00:14.0 [0xbdea8000 - 0xbdeb6fff]<br />
[ 0.537512] IOMMU: Setting identity map for device 0000:00:1a.0 [0xbdea8000 - 0xbdeb6fff]<br />
[ 0.537530] IOMMU: Setting identity map for device 0000:00:1d.0 [0xbdea8000 - 0xbdeb6fff]<br />
[ 0.537543] IOMMU: Prepare 0-16MiB unity mapping for LPC<br />
[ 0.537549] IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]<br />
[ 2.182790] [drm] DMAR active, disabling use of stolen memory}}<br />
<br />
===Ensuring that the groups are valid===<br />
<br />
The following command will allow you to see how your various PCI devices are mapped to IOMMU groups. If it does not return anything, you either have not enabled IOMMU support properly or your hardware does not support it.<br />
{{hc|$ for iommu_group in $(find /sys/kernel/iommu_groups/ -maxdepth 1 -mindepth 1 -type d); do echo "IOMMU group $(basename "$iommu_group")"; for device in $(ls -1 "$iommu_group"/devices/); do echo -n $'\t'; lspci -nns "$device"; done; done|<br />
IOMMU group 0<br />
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v2/Ivy Bridge DRAM Controller [8086:0158] (rev 09)<br />
IOMMU group 1<br />
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port [8086:0151] (rev 09)<br />
IOMMU group 2<br />
00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:0e31] (rev 04)<br />
IOMMU group 4<br />
00:1a.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 [8086:0e2d] (rev 04)<br />
IOMMU group 5<br />
00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller [8086:0e20] (rev 04)<br />
IOMMU group 10<br />
00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 [8086:0e26] (rev 04)<br />
IOMMU group 13<br />
06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)}}<br />
<br />
An IOMMU group is the smallest set of physical devices that can be passed to a virtual machine. For instance, in the example above, both the GPU in 06:00.0 and its audio controller in 06:00.1 belong to IOMMU group 13 and can only be passed together. The front-panel USB controller, however, has its own group (group 2) which is separate from both the USB expansion controller (group 10) and the rear USB controller (group 4), meaning that [[#Passing_through_a_USB_controller|any of them could be passed to a VM without affecting the others]].<br />
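To look up the group of one specific device rather than listing everything, one can resolve its {{ic|iommu_group}} symlink in sysfs. A minimal sketch; the {{ic|sysfs_root}} parameter exists only so the helper can be exercised against a mock tree (on a real system it would be {{ic|/sys}}):<br />

```shell
#!/bin/bash
# Print the IOMMU group number of a PCI device by resolving its
# iommu_group symlink; "iommu_group_of" is an illustrative name.
iommu_group_of() {
    local sysfs_root=$1 dev=$2
    basename "$(readlink -f "$sysfs_root/bus/pci/devices/$dev/iommu_group")"
}

# Real-system usage would look like:
#   iommu_group_of /sys 0000:06:00.0
```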
<br />
===Gotchas===<br />
====Plugging your guest GPU in an unisolated CPU-based PCIe slot====<br />
Not all PCIe slots are the same. Most motherboards have PCIe slots provided by both the CPU and the PCH. Depending on your CPU, it is possible that your processor-based PCIe slot does not support isolation properly, in which case the PCI slot itself will appear to be grouped with the device that is connected to it.<br />
{{bc|IOMMU group 1<br />
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)<br />
01:00.0 VGA compatible controller: NVIDIA Corporation GM107 [GeForce GTX 750] (rev a2)<br />
01:00.1 Audio device: NVIDIA Corporation Device 0fbc (rev a1)}}<br />
<br />
This is fine so long as only your guest GPU is included in here, such as above. Depending on what is plugged into your other PCIe slots and whether they are allocated to your CPU or your PCH, you may find yourself with additional devices within the same group, which would force you to pass those through as well. If you are OK with passing everything in the group to your VM, you are free to continue. Otherwise, you will need to either try plugging your GPU into your other PCIe slots (if you have any) and see if those provide isolation from the rest, or install the ACS override patch, which comes with its own drawbacks.<br />
<br />
{{note|If they are grouped with other devices in this manner, PCI root ports and bridges should neither be bound to vfio at boot, nor be added to the VM.}}<br />
<br />
==Isolating the GPU==<br />
Due to their size and complexity, GPU drivers do not tend to support dynamic rebinding very well, so a GPU that is in use on the host cannot simply be passed to a VM transparently without consequences. It is generally preferable to bind the guest GPU to a placeholder driver instead. This stops other drivers from attempting to claim it and forces the GPU to remain inactive while the VM is not running. There are two methods for doing this, but it is recommended to use vfio-pci if your kernel supports it.<br />
<br />
{{warning|Once you reboot after this procedure, whatever GPU you have configured will no longer be usable on the host until you reverse the manipulation. Make sure the GPU you intend to use on the host is properly configured before doing this.}}<br />
<br />
===Using vfio-pci===<br />
Starting with Linux 4.1, the kernel includes vfio-pci, which is functionally similar to pci-stub with a few added bonuses, such as switching devices into their D3 state when they are not in use. If your system supports it, which you can check by running the following command, you should use it. If it returns an error, you will have to rely on pci-stub instead.<br />
<br />
{{hc|$ modinfo vfio-pci|<br />
filename: /lib/modules/4.4.5-1-ARCH/kernel/drivers/vfio/pci/vfio-pci.ko.gz<br />
description: VFIO PCI - User Level meta-driver<br />
author: Alex Williamson <alex.williamson@redhat.com><br />
...}}<br />
<br />
Vfio-pci normally targets PCI devices by ID, meaning you only need to specify the IDs of the devices you intend to pass through. For the following IOMMU group, you would want to bind vfio-pci with {{ic|10de:13c2}} and {{ic|10de:0fbb}}, which will be used as example values for the rest of this section.<br />
<br />
{{bc|IOMMU group 13<br />
06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)}}<br />
<br />
{{note|If, as noted [[#Plugging_your_guest_GPU_in_an_unisolated_CPU-based_PCIe_slot|here]], your pci root port is part of your IOMMU group, you '''must not''' pass its ID to {{ic|vfio-pci}}, as it needs to remain attached to the host to function properly. Any other device within that group, however, should be left for {{ic|vfio-pci}} to bind with.}}<br />
<br />
You can then add those vendor-device ID pairs to the default parameters passed to vfio-pci whenever it is inserted into the kernel.<br />
{{hc|1=/etc/modprobe.d/vfio.conf|2=options vfio-pci ids=10de:13c2,10de:0fbb}}<br />
<br />
This, however, does not guarantee that vfio-pci will be loaded before other graphics drivers. To ensure that it is, we need to load it early in the boot process by adding it anywhere in the MODULES list in {{ic|mkinitcpio.conf}}, along with its dependencies.<br />
{{note|If you also have another driver loaded this way for [[Kernel_mode_setting#Early_KMS_start|early modesetting]] (such as "nouveau", "radeon", "amdgpu", "i915", etc.), all of the following VFIO modules must precede it.}}<br />
{{hc|1=/etc/mkinitcpio.conf|2=MODULES="... vfio vfio_iommu_type1 vfio_pci vfio_virqfd ..."}}<br />
<br />
Also, ensure that the modconf hook is included in the HOOKS list of mkinitcpio.conf:<br />
<br />
{{hc|1=/etc/mkinitcpio.conf|2=HOOKS="... modconf ..."}}<br />
<br />
Since new modules have been added to the initramfs configuration, it must be regenerated. Should you change the IDs of the devices in {{ic|/etc/modprobe.d/vfio.conf}}, you will also have to regenerate it, as those parameters must be specified in the initramfs to be known during the early boot stages.<br />
{{bc|# mkinitcpio -p linux}}<br />
{{Note|If you are using a non-standard kernel, such as {{ic|linux-vfio}}, replace {{ic|linux}} with whichever kernel you intend to use.}}<br />
<br />
Reboot and verify that vfio-pci has loaded properly and that it is now bound to the right devices.<br />
{{hc|$ dmesg <nowiki>|</nowiki> grep -i vfio |<br />
[ 0.329224] VFIO - User Level meta-driver version: 0.3<br />
[ 0.341372] vfio_pci: add [10de:13c2[ffff:ffff]] class 0x000000/00000000<br />
[ 0.354704] vfio_pci: add [10de:0fbb[ffff:ffff]] class 0x000000/00000000<br />
[ 2.061326] vfio-pci 0000:06:00.0: enabling device (0100 -> 0103)}}<br />
<br />
{{hc|$ lspci -nnk -d 10de:13c2|<br />
06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
Kernel driver in use: vfio-pci<br />
Kernel modules: nouveau nvidia}}<br />
<br />
{{hc|$ lspci -nnk -d 10de:0fbb|<br />
06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)<br />
Kernel driver in use: vfio-pci<br />
Kernel modules: snd_hda_intel}}<br />
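The same verification can be scripted by extracting the "Kernel driver in use" line from {{ic|lspci -nnk}} output. A small sketch, assuming the output format shown above; {{ic|driver_in_use}} is an illustrative helper name:<br />

```shell
#!/bin/bash
# Print the bound driver(s) from `lspci -nnk`-style output on stdin.
driver_in_use() {
    awk -F': ' '/Kernel driver in use/ {print $2}'
}

# e.g.  lspci -nnk -d 10de:13c2 | driver_in_use
```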
<br />
===Using pci-stub (legacy method, pre-4.1 kernels)===<br />
If your kernel does not support vfio-pci, you can use the pci-stub module instead.<br />
<br />
Pci-stub normally targets PCI devices by ID, meaning you only need to specify the IDs of the devices you intend to pass through. For the following IOMMU group, you would want to bind pci-stub with {{ic|10de:13c2}} and {{ic|10de:0fbb}}, which will be used as example values for the rest of this section.<br />
<br />
{{bc|IOMMU group 13<br />
06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)}}<br />
<br />
Most Linux distributions (including Arch Linux) have pci-stub built statically into the kernel image. If for any reason it needs to be loaded as a module in your case, you will need to bind it yourself using whatever tool your distribution provides for this, such as {{ic|mkinitcpio}} for Arch.<br />
{{hc|<nowiki>/etc/mkinitcpio.conf</nowiki>|<nowiki>MODULES="... pci-stub ..."</nowiki>}}<br />
<br />
If you did need to add this module to your kernel image configuration manually, you must also regenerate it.<br />
{{bc|# mkinitcpio -p linux}}<br />
{{Note|If you are using a non-standard kernel, such as {{ic|linux-vfio}}, replace {{ic|linux}} with whichever kernel you intend to use.}}<br />
<br />
Add the relevant PCI device IDs to the kernel command line:<br />
{{hc|<nowiki>/etc/default/grub</nowiki>|<nowiki>...<br />
GRUB_CMDLINE_LINUX_DEFAULT="... pci-stub.ids=10de:13c2,10de:0fbb ..."<br />
...</nowiki>}}<br />
{{note|If, as noted [[#Plugging_your_guest_GPU_in_an_unisolated_CPU-based_PCIe_slot|here]], your pci root port is part of your IOMMU group, you '''must not''' pass its ID to {{ic|pci-stub}}, as it needs to remain attached to the host to function properly. Any other device within that group, however, should be left for {{ic|pci-stub}} to bind with.}}<br />
<br />
Reload the grub configuration:<br />
{{bc|# grub-mkconfig -o /boot/grub/grub.cfg}}<br />
<br />
Check dmesg output for successful assignment of the device to pci-stub:<br />
{{hc|dmesg <nowiki>|</nowiki> grep pci-stub|<br />
[ 2.390128] pci-stub: add 10DE:13C2 sub<nowiki>=</nowiki>FFFFFFFF:FFFFFFFF cls<nowiki>=</nowiki>00000000/00000000<br />
[ 2.390143] pci-stub 0000:06:00.0: claimed by stub<br />
[ 2.390150] pci-stub: add 10DE:0FBB sub<nowiki>=</nowiki>FFFFFFFF:FFFFFFFF cls<nowiki>=</nowiki>00000000/00000000<br />
[ 2.390159] pci-stub 0000:06:00.1: claimed by stub}}<br />
<br />
===Gotchas===<br />
====Tainting your boot GPU====<br />
If you are passing through your boot GPU and you cannot change it, make sure you also add {{ic|video<nowiki>=</nowiki>efifb:off}} to your kernel command line so nothing gets sent to it before vfio-pci gets to bind with it.<br />
<br />
==Setting up an OVMF-based guest VM==<br />
OVMF is an open-source UEFI firmware for QEMU virtual machines. While it is possible to use SeaBIOS and get results similar to an actual PCI passthrough, the setup process is different and it is generally preferable to use the EFI method if your hardware supports it.<br />
<br />
===Configuring libvirt===<br />
[[Libvirt]] is a wrapper for a number of virtualization utilities that greatly simplifies the configuration and deployment process of virtual machines. In the case of KVM and QEMU, the frontend it provides allows us to avoid dealing with the permissions for QEMU and makes it easier to add and remove various devices on a live VM. Its status as a wrapper, however, means that it might not always support all of the latest QEMU features, which could end up requiring the use of a wrapper script to provide some extra arguments to QEMU.<br />
<br />
After installing {{Pkg|qemu}}, {{Pkg|libvirt}}, {{AUR|ovmf-git}} and {{Pkg|virt-manager}}, add the path to your OVMF firmware image and runtime variables template to your libvirt config so {{ic|virt-install}} or {{ic|virt-manager}} can find those later on.<br />
<br />
{{hc|/etc/libvirt/qemu.conf|<br />
<nowiki>nvram = [</nowiki><br />
"/usr/share/ovmf/x64/ovmf_x64.bin:/usr/share/ovmf/x64/ovmf_vars_x64.bin"<br />
]<br />
}}<br />
<br />
You can now [[enable]] and start {{ic|libvirtd}} and its logging component.<br />
<br />
{{bc|<br />
# systemctl enable --now libvirtd<br />
# systemctl enable virtlogd.socket}}<br />
<br />
===Setting up the guest OS===<br />
The process of setting up a VM using {{ic|virt-manager}} is mostly self-explanatory, as most of the process comes with fairly comprehensive on-screen instructions. However, you should pay special attention to the following steps:<br />
* When the VM creation wizard asks you to name your VM, check the "Customize before install" checkbox.<br />
* In the "Overview" section, set your firmware to "UEFI". If the option is grayed out, make sure that you have correctly specified the location of your firmware in {{ic|/etc/libvirt/qemu.conf}} and restart {{ic|libvirtd.service}}.<br />
* In the "Processor" section, change your CPU model to "host-passthrough". If it is not in the list, you will have to type it by hand. This will ensure that your CPU is detected properly, since it causes libvirt to expose your CPU capabilities exactly as they are instead of only those it recognizes (which is the preferred default behavior to make CPU behavior easier to reproduce). Without it, some applications may complain about your CPU being of an unknown model.<br />
* If you want to minimize IO overhead, go into "Add Hardware" and add a Controller for SCSI drives of the "VirtIO SCSI" model. You can then change the default IDE disk for a SCSI disk, which will bind to said controller.<br />
** Windows VMs will not recognize those drives by default, so you need to download the ISO containing the VirtIO drivers from [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/ here] and add an IDE CD-ROM storage device linking to said ISO, otherwise Windows will not recognize the drive during the installation process. When prompted to select a disk to install Windows on, load the drivers contained on the CD-ROM under ''vioscsi''.<br />
<br />
The rest of the installation process will take place as normal using a standard QXL video adapter running in a window. At this point, there is no need to install additional drivers for the rest of the virtual devices, since most of them will be removed later on. Once the guest OS is done installing, simply turn off the virtual machine.<br />
<br />
===Attaching the PCI devices===<br />
With the installation done, it's now possible to edit the hardware details in libvirt and remove virtual integration devices, such as the spice channel and virtual display, the QXL video adapter, the emulated mouse and keyboard and the USB tablet device. Since that leaves you with no input devices, you may want to bind a few USB host devices to your VM as well, but remember to '''leave at least one mouse and/or keyboard assigned to your host''' in case something goes wrong with the guest. At this point, it also becomes possible to attach the PCI device that was isolated earlier; simply click on "Add Hardware" and select the PCI Host Devices you want to passthrough. If everything went well, the screen plugged into your GPU should show the OVMF splash screen and your VM should start up normally. From there, you can setup the drivers for the rest of your VM.<br />
<br />
===Gotchas===<br />
====Using a non-EFI image on an OVMF-based VM====<br />
The OVMF firmware does not support booting off non-EFI media. If the installation process drops you into a UEFI shell right after booting, you may be using an invalid EFI boot medium. Try an alternate Linux/Windows image to determine whether your medium is at fault.<br />
<br />
==Performance tuning==<br />
Most use cases for PCI passthrough relate to performance-intensive domains such as video games and GPU-accelerated tasks. While a PCI passthrough on its own is a step towards reaching native performance, there are still a few adjustments to make on the host and guest to get the most out of your VM.<br />
<br />
===CPU pinning===<br />
The default behavior for KVM guests is to run operations coming from the guest as a number of threads representing virtual processors. Those threads are managed by the Linux scheduler like any other thread and are dispatched to any available CPU core based on niceness and priority queues. Since switching between threads adds a bit of overhead (because context switching forces the core to change its cache between operations), this can noticeably harm performance on the guest. CPU pinning aims to resolve this: it overrides process scheduling and ensures that the VM threads will always run on those specific cores and only on them. Here, for instance, the guest cores 0, 1, 2 and 3 are mapped to the host CPUs 4, 5, 6 and 7 respectively.<br />
<br />
{{hc|<nowiki>EDITOR=nano virsh edit myPciPassthroughVm</nowiki>|...<br />
<nowiki><vcpu placement='static'>4</vcpu><br />
<cputune><br />
<vcpupin vcpu='0' cpuset='4'/><br />
<vcpupin vcpu='1' cpuset='5'/><br />
<vcpupin vcpu='2' cpuset='6'/><br />
<vcpupin vcpu='3' cpuset='7'/><br />
</cputune></nowiki><br />
...}}<br />
<br />
====The case of Hyper-threading====<br />
If your CPU supports simultaneous multithreading, known as Hyper-threading on Intel chips, there are two ways you can go with your CPU pinning. Hyper-threading is simply a very efficient way of running two threads on one physical core at any given time: while it may give you 8 logical cores on what would otherwise be a quad-core CPU, if a physical core is overloaded, its second logical core will not be of any use. One could pin their VM threads on 2 physical cores and their 2 respective threads, but any task overloading those two cores will not be helped by the extra two logical cores, since in the end you are only passing through two cores out of four, not four out of eight. What you should do knowing this depends on what you intend to do with your host while your VM is running.<br />
<br />
This is the abridged content of {{ic|/proc/cpuinfo}} on a quad-core machine with hyper-threading.<br />
{{hc|<nowiki>$ cat /proc/cpuinfo | grep -e "processor" -e "core id" -e "^$"</nowiki>|<br />
processor : 0<br />
core id : 0<br />
<br />
processor : 1<br />
core id : 1<br />
<br />
processor : 2<br />
core id : 2<br />
<br />
processor : 3<br />
core id : 3<br />
<br />
processor : 4<br />
core id : 0<br />
<br />
processor : 5<br />
core id : 1<br />
<br />
processor : 6<br />
core id : 2<br />
<br />
processor : 7<br />
core id : 3}}<br />
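The sibling relationship can also be computed from that output by grouping logical processor numbers under their {{ic|core id}}. A sketch, assuming the two-field format above; {{ic|siblings_by_core}} is an illustrative name:<br />

```shell
#!/bin/bash
# Group logical processors by physical core id, reading
# /proc/cpuinfo-style text on stdin.
siblings_by_core() {
    awk '/^processor/ {p=$3}
         /^core id/   {m[$4] = m[$4] " " p}
         END {for (c in m) printf "core %s:%s\n", c, m[c]}' | sort
}

# e.g.  siblings_by_core < /proc/cpuinfo
```

On the quad-core machine above, this would print {{ic|core 0: 0 4}} through {{ic|core 3: 3 7}}, confirming which logical CPUs share a physical core.<br />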
<br />
If you do not intend to run any computation-heavy workloads on the host (or anything at all) at the same time as the VM, it is probably better to pin your VM threads across all of your logical cores, so that the VM can fully take advantage of the spare CPU time on all of them.<br />
<br />
On the quad-core machine mentioned above, it would look like this:<br />
{{hc|<nowiki>EDITOR=nano virsh edit myPciPassthroughVm</nowiki>|...<br />
<nowiki><vcpu placement='static'>4</vcpu><br />
<cputune><br />
<vcpupin vcpu='0' cpuset='4'/><br />
<vcpupin vcpu='1' cpuset='5'/><br />
<vcpupin vcpu='2' cpuset='6'/><br />
<vcpupin vcpu='3' cpuset='7'/><br />
</cputune><br />
...<br />
<cpu mode='custom' match='exact'><br />
...<br />
<topology sockets='1' cores='4' threads='1'/><br />
...<br />
</cpu></nowiki><br />
...}}<br />
<br />
If you would instead prefer to have the host and guest run intensive tasks at the same time, it is preferable to pin a limited number of physical cores and their respective threads to the guest and leave the rest to the host, to avoid the two competing for CPU time.<br />
<br />
On the quad-core machine mentioned above, it would look like this:<br />
{{hc|<nowiki>EDITOR=nano virsh edit myPciPassthroughVm</nowiki>|...<br />
<nowiki><vcpu placement='static'>4</vcpu><br />
<cputune><br />
<vcpupin vcpu='0' cpuset='2'/><br />
<vcpupin vcpu='1' cpuset='3'/><br />
<vcpupin vcpu='2' cpuset='6'/><br />
<vcpupin vcpu='3' cpuset='7'/><br />
</cputune><br />
...<br />
<cpu mode='custom' match='exact'><br />
...<br />
<topology sockets='1' cores='2' threads='2'/><br />
...<br />
</cpu></nowiki><br />
...}}<br />
<br />
===Static huge pages===<br />
When dealing with applications that require large amounts of memory, memory latency can become a problem, since the more memory pages are being used, the more likely it is that the application will attempt to access information across multiple memory "pages", which are the base unit for memory allocation. Resolving the actual address of a memory page takes multiple steps, so CPUs normally cache information on recently used memory pages to make subsequent accesses to the same pages faster. Applications using large amounts of memory run into a problem where, for instance, a virtual machine uses 4GB of memory divided into 4kB pages (the default size for normal pages), meaning that such cache misses can become extremely frequent and greatly increase memory latency. Huge pages exist to mitigate this issue by giving larger individual pages to those applications, increasing the odds that multiple operations will target the same page in succession. This is normally handled with transparent huge pages, which dynamically manage huge pages to keep up with the demand.<br />
<br />
On a VM with a PCI passthrough, however, it is '''not possible''' to benefit from transparent huge pages, as IOMMU requires that the guest's memory be allocated and pinned as soon as the VM starts. It is therefore required to allocate huge pages statically in order to benefit from them. <br />
<br />
{{warning|Do note that static huge pages lock down the allocated amount of memory, making it unavailable for applications that are not configured to use them. Allocating 4GBs worth of huge pages on a machine with 8GBs of memory will only leave you with 4GBs of available memory on the host '''even when the VM is not running'''.}}<br />
<br />
To allocate huge pages at boot, simply specify the desired amount on your kernel command line with {{ic|<nowiki>hugepages=x</nowiki>}}. For instance, reserving 1024 pages with {{ic|<nowiki>hugepages=1024</nowiki>}} at the default size of 2048kB per huge page sets aside 2GB of memory for the virtual machine to use.<br />
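The page count for a given guest memory size is simple arithmetic: guest memory divided by huge page size, rounded up. A sketch of that calculation; {{ic|hugepages_needed}} is an illustrative helper, not an existing tool:<br />

```shell
#!/bin/bash
# Number of huge pages needed for a guest of vm_mib MiB when each
# huge page is page_kib KiB, rounding up. Illustrative helper only.
hugepages_needed() {
    local vm_mib=$1 page_kib=$2
    echo $(( (vm_mib * 1024 + page_kib - 1) / page_kib ))
}

hugepages_needed 8192 2048   # an 8GB guest with 2048kB pages -> prints 4096
```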
<br />
Also, since static huge pages can only be used by applications that specifically request them, you must add this section to your libvirt domain configuration to allow kvm to benefit from them:<br />
{{hc|<nowiki>EDITOR=nano virsh edit myPciPassthroughVm</nowiki>|...<br />
<memoryBacking><br />
<hugepages/><br />
</memoryBacking><br />
...}}<br />
<br />
== Complete example for QEMU (CLI-based) without libvirtd (can switch GPUs without reboot) ==<br />
{{Remove|Too specific, requires considerable efforts to work on other machines.|section=Removing_example_scripts}}<br />
<br />
This script starts Samba and Synergy, runs the VM and closes everything after the VM is shut down. Note that this method does '''not''' require libvirtd to be running or configured.<br />
<br />
Since this was posted, the author continued working on scripts to ease the workflow of switching GPUs. All of said scripts can be found on the author's GitLab instance: https://git.mel.vin/melvin/scripts/tree/master/qemu.<br />
<br />
With these new scripts, it is possible to switch GPUs without rebooting; only a restart of the X session is needed. This is all handled by a tiny shell script that runs in the tty. When you log in to the tty, it will ask which card you would like to use, if you autolaunch the shell script.<br />
<br />
[https://www.redhat.com/archives/vfio-users/2016-May/msg00187.html vfio-users : Full set of (runtime) scripts for VFIO + Qemu CLI]<br />
<br />
[https://www.redhat.com/archives/vfio-users/2015-August/msg00020.html vfio-users : Example configuration with CLI Qemu (working VM => host audio)]<br />
<br />
The script below is the main QEMU launcher as of 2016-05-16, all other scripts can be found in the repo.<br />
<br />
{{hc|slightly edited from "windows.sh" 2016-05-16 : https://git.mel.vin/melvin/scripts/tree/master/qemu|<nowiki>#!/bin/bash<br />
<br />
if [[ $EUID -ne 0 ]]<br />
then<br />
echo "This script must be run as root"<br />
exit 1<br />
fi<br />
<br />
echo "Starting Samba"<br />
systemctl start smbd.service<br />
systemctl start nmbd.service<br />
<br />
echo "Starting VM"<br />
export QEMU_AUDIO_DRV="pa"<br />
qemu-system-x86_64 \<br />
-serial none \<br />
-parallel none \<br />
-nodefaults \<br />
-nodefconfig \<br />
-no-user-config \<br />
-enable-kvm \<br />
-name Windows \<br />
-cpu host,kvm=off,hv_vapic,hv_time,hv_relaxed,hv_spinlocks=0x1fff,hv_vendor_id=sugoidesu \<br />
-smp sockets=1,cores=4,threads=1 \<br />
-m 8192 \<br />
-mem-path /dev/hugepages \<br />
-mem-prealloc \<br />
-soundhw hda \<br />
-device ich9-usb-uhci3,id=uhci \<br />
-device usb-ehci,id=ehci \<br />
-device nec-usb-xhci,id=xhci \<br />
-machine pc,accel=kvm,kernel_irqchip=on,mem-merge=off \<br />
-drive if=pflash,format=raw,file=./Windows_ovmf_x64.bin \<br />
-rtc base=localtime,clock=host,driftfix=none \<br />
-boot order=c \<br />
-net nic,vlan=0,macaddr=52:54:00:00:00:01,model=virtio,name=net0 \<br />
-net bridge,vlan=0,name=bridge0,br=br0 \<br />
-drive if=virtio,id=drive0,file=./Windows.img,format=raw,cache=none,aio=native \<br />
-nographic \<br />
-device vfio-pci,host=04:00.0,addr=09.0,multifunction=on \<br />
-device vfio-pci,host=04:00.1,addr=09.1 \<br />
-usbdevice host:046d:c29b `# Logitech G27` &<br />
<br />
# -usbdevice host:054c:05c4 `# Sony DualShock 4` \<br />
# -usbdevice host:28de:1142 `# Steam Controller` \<br />
<br />
sleep 5<br />
<br />
while [[ $(pgrep -x -u root qemu-system-x86) ]]<br />
do<br />
if [[ ! $(pgrep -x -u REGULAR_USER synergys) ]]<br />
then<br />
echo "Starting Synergy server"<br />
sudo -u REGULAR_USER /usr/bin/synergys --debug ERROR --no-daemon --enable-crypto --config /etc/synergy.conf &<br />
fi<br />
<br />
sleep 5<br />
done<br />
<br />
echo "VM stopped"<br />
<br />
echo "Stopping Synergy server"<br />
pkill -u REGULAR_USER synergys<br />
<br />
echo "Stopping Samba"<br />
systemctl stop smbd.service<br />
systemctl stop nmbd.service<br />
<br />
exit 0</nowiki>}}<br />
<br />
== Complete example for QEMU with libvirtd ==<br />
{{Remove|Too specific,only works on the contributor's machine.|section=Removing_example_scripts}}<br />
<br />
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'><br />
<name>win7</name><br />
<uuid>a3bf6450-d26b-4815-b564-b1c9b098a740</uuid><br />
<memory unit='KiB'>8388608</memory><br />
<currentMemory unit='KiB'>8388608</currentMemory><br />
<vcpu placement='static'>8</vcpu><br />
<os><br />
<type arch='x86_64' machine='pc-i440fx-2.4'>hvm</type><br />
<boot dev='hd'/><br />
<bootmenu enable='yes'/><br />
</os><br />
<features><br />
<acpi/><br />
<kvm><br />
<hidden state='on'/><br />
</kvm><br />
</features><br />
<cpu mode='host-passthrough'><br />
<topology sockets='1' cores='8' threads='1'/><br />
</cpu><br />
<clock offset='utc'/><br />
<on_poweroff>destroy</on_poweroff><br />
<on_reboot>restart</on_reboot><br />
<on_crash>destroy</on_crash><br />
<devices><br />
<emulator>/usr/sbin/qemu-system-x86_64</emulator><br />
<disk type='block' device='disk'><br />
<driver name='qemu' type='raw' cache='none' io='native'/><br />
<source dev='/dev/rootvg/win7'/><br />
<target dev='vda' bus='virtio'/><br />
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/><br />
</disk><br />
<disk type='block' device='disk'><br />
<driver name='qemu' type='raw' cache='none' io='native'/><br />
<source dev='/dev/rootvg/windane'/><br />
<target dev='vdb' bus='virtio'/><br />
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/><br />
</disk><br />
<disk type='block' device='cdrom'><br />
<driver name='qemu' type='raw' cache='none' io='native'/><br />
<target dev='hdb' bus='ide'/><br />
<readonly/><br />
<address type='drive' controller='0' bus='0' target='0' unit='1'/><br />
</disk><br />
<controller type='usb' index='0'><br />
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/><br />
</controller><br />
<controller type='pci' index='0' model='pci-root'/><br />
<controller type='ide' index='0'><br />
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/><br />
</controller><br />
<controller type='sata' index='0'><br />
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/><br />
</controller><br />
<interface type='network'><br />
<mac address='52:54:00:fa:59:92'/><br />
<source network='default'/><br />
<model type='rtl8139'/><br />
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/><br />
</interface><br />
<input type='mouse' bus='ps2'/><br />
<input type='keyboard' bus='ps2'/><br />
<sound model='ac97'><br />
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/><br />
</sound><br />
<memballoon model='virtio'><br />
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/><br />
</memballoon><br />
</devices><br />
<qemu:commandline><br />
<qemu:arg value='-device'/><br />
<qemu:arg value='vfio-pci,host=02:00.0,multifunction=on,x-vga=on'/><br />
<qemu:arg value='-device'/><br />
<qemu:arg value='vfio-pci,host=02:00.1'/><br />
<qemu:env name='QEMU_PA_SAMPLES' value='1024'/><br />
<qemu:env name='QEMU_AUDIO_DRV' value='pa'/><br />
<qemu:env name='QEMU_PA_SERVER' value='/run/user/1000/pulse/native'/><br />
</qemu:commandline><br />
</domain><br />
<br />
==Troubleshooting==<br />
==="Error 43 : Driver failed to load" on Nvidia GPUs passed to Windows VMs===<br />
{{Note|This may also fix SYSTEM_THREAD_EXCEPTION_NOT_HANDLED boot crashes related to Nvidia drivers}}<br />
<br />
Since version 337.88, Nvidia drivers on Windows check whether a hypervisor is running and fail if they detect one, which results in an Error 43 in the Windows device manager. Starting with QEMU 2.5.0 and libvirt 1.3.3, the vendor_id for the hypervisor can be spoofed, which is enough to fool the Nvidia drivers into loading anyway. All one must do is add {{ic|<nowiki>hv_vendor_id=whatever</nowiki>}} to the cpu parameters in their QEMU command line, or add the following line to their libvirt domain configuration. It may help for the ID to be set to a 12-character alphanumeric string (e.g. {{ic|123456789ab}}) as opposed to longer or shorter strings.<br />
<br />
{{hc|<nowiki>EDITOR=nano virsh edit myPciPassthroughVm</nowiki>|...<br />
<nowiki><br />
<features><br />
<hyperv><br />
...<br />
<nowiki><vendor_id state='on' value='whatever'/></nowiki><br />
...<br />
</hyperv><br />
...<br />
<kvm><br />
<nowiki><hidden state='on'/></nowiki><br />
</kvm><br />
</features>...<br />
</nowiki><br />
}}<br />
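For users who launch QEMU directly instead of through libvirt, a sketch of the equivalent {{ic|-cpu}} flags follows (the vendor ID is an arbitrary example and the rest of the command line is omitted); {{ic|1=kvm=off}} corresponds to the {{ic|<nowiki><hidden state='on'/></nowiki>}} element above:<br />
<br />
{{bc|<nowiki>qemu-system-x86_64 ... -cpu host,kvm=off,hv_vendor_id=123456789ab ...</nowiki>}}<br />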
<br />
Users with older versions of QEMU and/or libvirt will instead have to disable a few hypervisor extensions, which can degrade performance substantially. If this is what you want to do, replace the following section of your libvirt domain config file:<br />
{{hc|<nowiki>EDITOR=nano virsh edit myPciPassthroughVm</nowiki>|...<br />
<nowiki><features><br />
<hyperv><br />
<relaxed state='on'/><br />
<vapic state='on'/><br />
<spinlocks state='on' retries='8191'/><br />
</hyperv><br />
...<br />
</features><br />
...<br />
<clock offset='localtime'><br />
<timer name='hypervclock' present='yes'/><br />
</clock></nowiki><br />
...}}<br />
<br />
with:<br />
{{bc|<nowiki>...<br />
<br />
<clock offset='localtime'><br />
<timer name='hypervclock' present='no'/><br />
</clock><br />
...<br />
<features><br />
<kvm><br />
<hidden state='on'/><br />
</kvm><br />
...<br />
<hyperv><br />
<relaxed state='off'/><br />
<vapic state='off'/><br />
<spinlocks state='off'/><br />
</hyperv><br />
...<br />
</features><br />
...</nowiki>}}<br />
<br />
As of NVIDIA driver version 368.39, using {{ic|<nowiki><hidden state='on'/></nowiki>}} is required, which does have the effect of disabling the hypervisor extensions for the guest [https://patchwork.ozlabs.org/patch/355005/]. To counter this, a Windows-side patch to the driver exists [https://github.com/sk1080/nvidia-kvm-patcher], for those who understand the risks of enabling test signing or otherwise bypassing driver signature enforcement.<br />
<br />
===Unexpected crashes related to CPU exceptions===<br />
In some cases, kvm may react strangely to certain CPU operations, such as GeForce Experience complaining about an unsupported CPU being present or some game crashing for unknown reasons. A number of those issues can be solved by passing the {{ic|1=ignore_msrs=1}} option to the KVM module, which will ignore unimplemented MSRs instead of returning an error value.<br />
<br />
{{hc|<nowiki>/etc/modprobe.d/kvm.conf</nowiki>|<nowiki>...<br />
options kvm ignore_msrs=1<br />
...</nowiki>}}<br />
<br />
{{Warning|While this is normally safe and some applications might not work without this, silently ignoring unknown MSR accesses could potentially break other software within the VM or other VMs.}}<br />
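Assuming the {{ic|kvm}} module is already loaded, the option can also be toggled at runtime for testing by writing to its sysfs parameter as root (this does not persist across reboots, so keep the {{ic|modprobe.d}} entry for a permanent setting):<br />
<br />
{{bc|<nowiki># echo 1 > /sys/module/kvm/parameters/ignore_msrs</nowiki>}}<br />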
<br />
==="System Thread Exception Not Handled" when booting on a Windows VM===<br />
Windows 8 or Windows 10 guests may raise a generic compatibility exception at boot, namely "System Thread Exception Not Handled", which tends to be caused by legacy drivers acting strangely on real machines. On KVM machines this issue can generally be solved by setting the CPU model to {{ic|core2duo}}.<br />
<br />
===Slowed down audio pumped through HDMI on the video card===<br />
For some users, the VM's audio slows down, starts stuttering or becomes demonic after a while when it is pumped through HDMI on the video card. This usually also slows down graphics.<br />
A possible solution consists of enabling MSI (Message Signaled Interrupts) instead of the default (Line-Based Interrupts).<br />
<br />
In order to check whether MSI is supported or enabled, run the following command as root:<br />
 # lspci -vs $device | grep 'MSI:'<br />
where {{ic|$device}} is the card's address (e.g. {{ic|01:00.0}}).<br />
<br />
The output should be similar to:<br />
Capabilities: [60] MSI: Enable'''-''' Count=1/1 Maskable- 64bit+<br />
<br />
A {{ic|-}} after {{ic|Enable}} means MSI is supported, but not used by the VM, while a {{ic|+}} means that the VM is using it.<br />
<br />
The procedure to enable it is quite complex, instructions and an overview of the setting can be found [http://forums.guru3d.com/showthread.php?t=378044 here].<br />
<br />
Other hints can be found on the [http://lime-technology.com/wiki/index.php/UnRAID_6/VM_Guest_Support#Enable_MSI_for_Interrupts_to_Fix_HDMI_Audio_Support lime-technology's wiki], or on this article on [http://vfio.blogspot.it/2014/09/vfio-interrupts-and-how-to-coax-windows.html VFIO tips and tricks].<br />
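In essence, the guides above amount to setting a single registry value for the device from within the Windows guest. A sketch of the key involved is shown below (the device instance path varies per machine; back up the registry first, and note that driver updates may reset the value):<br />
<br />
{{bc|<nowiki>HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\<device instance path>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties<br />
    MSISupported (DWORD) = 1</nowiki>}}<br />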
<br />
Some tools named {{ic|MSI_util}} or similar are available on the Internet, but they reportedly do not work on Windows 10 64-bit.<br />
<br />
Enabling MSI on the VGA function of the card (e.g. {{ic|01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1) (prog-if 00 [VGA controller])}}) alone may not be enough to fix the issue; enabling it on the other, audio device function as well ({{ic|01:00.1 Audio device: NVIDIA Corporation Device 0fba (rev a1)}}) may be required.<br />
<br />
==Passing through other devices==<br />
===USB controller===<br />
If your motherboard has multiple USB controllers mapped to multiple groups, it is possible to pass those instead of individual USB devices. Passing an entire controller instead of a single USB device provides the following advantages:<br />
<br />
* If a device disconnects or changes ID over the course of a given operation (such as a phone undergoing an update), the VM will not suddenly stop seeing it.<br />
* Any USB port managed by this controller is directly handled by the VM and can have its devices unplugged, replugged and changed without having to notify the hypervisor.<br />
* Libvirt will not complain if one of the USB devices you usually pass to the guest is missing when starting the VM.<br />
<br />
Unlike with GPUs, drivers for most USB controllers do not require any specific configuration to work on a VM and control can normally be passed back and forth between the host and guest systems with no side effects.<br />
<br />
You can find out which PCI devices correspond to which controller and how various ports and devices are assigned to each one of them using this command:<br />
<br />
{{hc|$ <nowiki>for usb_ctrl in $(find /sys/bus/usb/devices/usb* -maxdepth 0 -type l); do pci_path="$(dirname "$(realpath "${usb_ctrl}")")"; echo "Bus $(cat "${usb_ctrl}/busnum") --> $(basename $pci_path) (IOMMU group $(basename $(realpath $pci_path/iommu_group)))"; lsusb -s "$(cat "${usb_ctrl}/busnum"):"; echo; done</nowiki>|<br />
Bus 1 --> 0000:00:1a.0 (IOMMU group 4)<br />
Bus 001 Device 004: ID 04f2:b217 Chicony Electronics Co., Ltd Lenovo Integrated Camera (0.3MP)<br />
Bus 001 Device 007: ID 0a5c:21e6 Broadcom Corp. BCM20702 Bluetooth 4.0 [ThinkPad]<br />
Bus 001 Device 008: ID 0781:5530 SanDisk Corp. Cruzer<br />
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub<br />
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub<br />
<br />
Bus 2 --> 0000:00:1d.0 (IOMMU group 9)<br />
Bus 002 Device 006: ID 0451:e012 Texas Instruments, Inc. TI-Nspire Calculator<br />
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub<br />
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub}}<br />
<br />
This laptop has 3 USB ports managed by 2 USB controllers, each in its own IOMMU group. In this example, Bus 001 manages a single USB port (with a SanDisk USB pendrive plugged into it so it appears on the list), but also a number of internal devices, such as the internal webcam and the Bluetooth card. Bus 002, on the other hand, does not appear to manage anything except for the calculator that is plugged into it. The third port is empty, which is why it does not show up on the list, but is actually managed by Bus 002.<br />
<br />
Once you have identified which controller manages which ports by plugging various devices into them and decided which one you want to passthrough, simply add it to the list of PCI host devices controlled by the VM in your guest configuration. No other configuration should be needed.<br />
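For instance, with a libvirt guest, passing the controller at {{ic|00:1d.0}} from the example above would mean adding a {{ic|hostdev}} entry like the following to the {{ic|<nowiki><devices></nowiki>}} section of the domain configuration (adjust the address to match your own controller):<br />
<br />
{{bc|<nowiki><hostdev mode='subsystem' type='pci' managed='yes'><br />
  <source><br />
    <address domain='0x0000' bus='0x00' slot='0x1d' function='0x0'/><br />
  </source><br />
</hostdev></nowiki>}}<br />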
<br />
===Gotchas===<br />
====Passing through a device that does not support resetting====<br />
When the VM shuts down, all devices used by the guest are deinitialized by its OS in preparation for shutdown. In this state, those devices are no longer functional and must be power-cycled before they can resume normal operation. Linux can handle this power-cycling on its own, but when a device has no known reset method, it remains in this disabled state and becomes unavailable. Since libvirt and QEMU both expect all host PCI devices to be ready to reattach to the host before completely stopping the VM, when encountering a device that will not reset, they will hang in a "Shutting down" state where they cannot be restarted until the host system has been rebooted. It is therefore recommended to only pass through PCI devices which the kernel is able to reset, as evidenced by the presence of a {{ic|reset}} file in the PCI device sysfs node, such as {{ic|/sys/bus/pci/devices/0000:00:1a.0/reset}}.<br />
<br />
The following bash command, based on the one used to list IOMMU groups, shows which devices can and cannot be reset.<br />
<br />
{{hc|<nowiki>for iommu_group in $(find /sys/kernel/iommu_groups/ -maxdepth 1 -mindepth 1 -type d);do echo "IOMMU group $(basename "$iommu_group")"; for device in $(ls -1 "$iommu_group"/devices/); do if [[ -e "$iommu_group"/devices/"$device"/reset ]]; then echo -n "[RESET]"; fi; echo -n $'\t';lspci -nns "$device"; done; done</nowiki>|<br />
IOMMU group 0<br />
00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v2/Ivy Bridge DRAM Controller [8086:0158] (rev 09)<br />
IOMMU group 1<br />
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port [8086:0151] (rev 09)<br />
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 720] [10de:1288] (rev a1)<br />
01:00.1 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)<br />
IOMMU group 2<br />
00:14.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:1e31] (rev 04)<br />
IOMMU group 4<br />
[RESET] 00:1a.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)<br />
IOMMU group 5<br />
[RESET] 00:1b.0 Audio device [0403]: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller [8086:1e20] (rev 04)<br />
IOMMU group 10<br />
[RESET] 00:1d.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 [8086:1e26] (rev 04)<br />
IOMMU group 13<br />
06:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
06:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)<br />
}}<br />
<br />
This signals that the xHCI USB controller in 00:14.0 cannot be reset and will therefore stop the VM from shutting down properly, while the integrated sound card in 00:1b.0 and the other two controllers in 00:1a.0 and 00:1d.0 do not share this problem and can be passed without issue.<br />
<br />
==Additional Information==<br />
===ACS Override Patch===<br />
If you find your PCI devices grouped among others that you do not wish to pass through, you may be able to separate them using Alex Williamson's ACS override patch. Make sure you understand [http://vfio.blogspot.com/2014/08/iommu-groups-inside-and-out.html the potential risk] of doing so.<br />
<br />
You will need a kernel with the patch applied. The easiest method of acquiring one is through the {{AUR|linux-vfio}} package.<br />
<br />
In addition, the ACS override patch needs to be enabled with kernel command line options. The patch file adds the following documentation:<br />
<br />
pcie_acs_override =<br />
[PCIE] Override missing PCIe ACS support for:<br />
downstream<br />
All downstream ports - full ACS capabilities<br />
multifunction<br />
All multifunction devices - multifunction ACS subset<br />
id:nnnn:nnnn<br />
Specific device - full ACS capabilities<br />
Specified as vid:did (vendor/device ID) in hex<br />
<br />
The option {{ic|pcie_acs_override<nowiki>=</nowiki>downstream}} is typically sufficient.<br />
<br />
After installation and configuration, reconfigure your [[Kernel parameters|bootloader kernel parameters]] to load the new kernel with the {{ic|pcie_acs_override<nowiki>=</nowiki>}} option enabled.<br />
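For example, with [[GRUB]] this would mean appending the option to {{ic|GRUB_CMDLINE_LINUX_DEFAULT}} in {{ic|/etc/default/grub}} (the other options shown are illustrative) and regenerating the configuration as root:<br />
<br />
{{hc|/etc/default/grub|<nowiki>GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream"</nowiki>}}<br />
<br />
 # grub-mkconfig -o /boot/grub/grub.cfg<br />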
<br />
== See also ==<br />
* [https://bbs.archlinux.org/viewtopic.php?id=162768 Discussion on Arch Linux forums] | [https://archive.is/kZYMt Archived link]<br />
* [https://docs.google.com/spreadsheet/ccc?key=0Aryg5nO-kBebdFozaW9tUWdVd2VHM0lvck95TUlpMlE User contributed hardware compatibility list]<br />
* [http://pastebin.com/rcnUZCv7 Example script from https://www.youtube.com/watch?v=37D2bRsthfI]<br />
* [http://vfio.blogspot.com/ Complete tutorial for PCI passthrough]<br />
* [https://www.redhat.com/archives/vfio-users/ VFIO users mailing list]<br />
* [https://webchat.freenode.net/?channels=vfio-users #vfio-users on freenode]</div>Peoro