https://wiki.archlinux.org/api.php?action=feedcontributions&user=Af&feedformat=atomArchWiki - User contributions [en]2024-03-28T20:36:45ZUser contributionsMediaWiki 1.41.0https://wiki.archlinux.org/index.php?title=Scanner_Button_Daemon&diff=796284Scanner Button Daemon2024-01-07T10:19:31Z<p>Af: updated AUR link to the only available package scanbd-git</p>
<hr />
<div>[[Category:Digital imaging]]<br />
[[ja:Scanner Button Daemon]]<br />
Most desktop scanners are more or less "passive" devices: they work when driven by a suitable application, but their hardware buttons do nothing on their own.<br />
<br />
[https://gitlab.com/sane-project/frontend/scanbd scanbd] solves this by managing such scanners so that their buttons can be used ''(provided the buttons are supported by sane)''.<br />
<br />
== How does it work? ==<br />
<br />
''scanbd'' (the scanner button daemon) opens and polls the scanner, and therefore locks the device, so no other application can access it directly (by opening the /dev/... node, via libusb, etc.).<br />
<br />
To solve this, a second daemon is used (the so-called "manager mode" of ''scanbd''): ''scanbm'' is configured as a "proxy" for accessing the scanner; if another application tries to use the scanner, the polling daemon is told to suspend polling for as long as the scanning application needs the device.<br />
<br />
To make this happen, ''scanbm'' is configured instead of [[SANE#Sharing your scanner over a network|saned]] as the network scanning daemon. If a scan request arrives on the sane-port, ''scanbm'' stops the polling by sending a dbus-signal to the polling ''scanbd-daemon''. Then it starts the real ''saned'' which scans and sends the data back to the requesting application. Afterwards the scanbd-manager ''scanbm'' restarts the polling by sending another dbus-signal to ''scanbd''.<br />
<br />
Because of the above, setting up ''scanbd'' requires changes to the default [[SANE]] configuration, as well as defining your own action scripts (which specify what should happen when a button on the scanner is pressed).<br />
<br />
There are also alternatives to ''scanbd'', e.g. [http://scanbuttond.sourceforge.net/ scanbuttond], however these appear to be unmaintained.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{AUR|scanbd-git}} package.<br />
<br />
=== Sane configuration ===<br />
<br />
Since ''scanbd'' and ''saned'' run on the same machine that the scanner is connected to, we need two sets of saned configuration files: one in the default location ({{ic|/etc/sane.d/}}), which redirects local applications to a network socket that systemd listens on, and another one (e.g. {{ic|/etc/scanbd/sane.d/}}), which is actually used by the sane backend to access the attached scanner.<br />
<br />
First, copy all configuration files from {{ic|/etc/sane.d/}} to {{ic|/etc/scanbd/sane.d/}} (these will be needed later):<br />
<br />
# cp -r /etc/sane.d/* /etc/scanbd/sane.d/<br />
<br />
Modify {{ic|/etc/sane.d/dll.conf}} so that it includes ''only'' the "net" directive (either delete the other directives, or comment them out with a # symbol):<br />
<br />
{{hc|/etc/sane.d/dll.conf|<br />
net}}<br />
<br />
Modify the net-backend configuration file (see scanbd's README.txt for more complicated setups):<br />
<br />
{{hc|/etc/sane.d/net.conf|2=<br />
connect_timeout = 3<br />
localhost # scanbm is listening on localhost}}<br />
<br />
Now the desktop applications (which use libsane) are forced (by the above dll.conf) to use the net-backend only. This prevents them from using the locally attached scanners directly (and blocking them).<br />
<br />
Whenever there is a connection to the standard sane network socket, systemd starts ''scanbm'' ("manager mode" of scanbd), which in turn tells the (already running) ''scanbd'' to stop polling the scanner, and then starts ''saned'' with the alternative configuration directory.<br />
<br />
The last step is to modify the alternative configuration of sane in {{ic|/etc/scanbd/sane.d/dll.conf}}: just make sure that the "net" directive is commented and the corresponding scanner-backends are uncommented:<br />
<br />
{{hc|/etc/scanbd/sane.d/dll.conf|<br />
#net<br />
pixma<br />
epson2<br />
#... whatever other scanner backend needed ...}}<br />
<br />
Now it is time to [[start/enable]] {{ic|scanbd.service}} and [[start]] {{ic|scanbm.socket}}. <br />
<br />
You can check the {{ic|scanbd.service}} and {{ic|scanbm.socket}} [[unit status]] to see if the scanbd service and scanbm socket were started. To increase debugging verbosity, change {{ic|1=debug-level = 7}} in {{ic|/etc/scanbd/scanbd.conf}} and restart the scanbd service.<br />
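<br />
For example, to follow the daemon's log output while pressing scanner buttons:<br />
<br />
 $ journalctl -u scanbd.service -f<br />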
<br />
If keeping the default user ''daemon'' and group ''scanner'' in {{ic|/etc/scanbd/scanbd.conf}}, make sure you have added ''daemon'' to the ''scanner'' group, otherwise ''scanbm'' will not work.<br />
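<br />
For example (''daemon'' and ''scanner'' being those defaults):<br />
<br />
 # gpasswd -a daemon scanner<br />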
<br />
=== scanbd configuration ===<br />
<br />
If you are lucky, your scanner will work almost out of the box and you will only need to modify the action scripts, which define what is done when a particular button is pressed.<br />
<br />
''scanbd'' polls the scanner's status and, based on the values received, decides what to do. The standard behaviour is defined in {{ic|/etc/scanbd/scanbd.conf}}. E.g. the action scan:<br />
<br />
action scan {<br />
filter = "^scan.*"<br />
numerical-trigger {<br />
from-value = 1<br />
to-value = 0<br />
}<br />
desc = "Scan to file"<br />
script = "test.script"<br />
}<br />
<br />
Whenever the message from the scanner includes the word "scan" (the {{ic|filter}} value is a regular expression) and the value changes from 1 to 0, the script {{ic|/etc/scanbd/test.script}} is run.<br />
<br />
{{ic|/etc/scanbd/test.script}} does little more than send messages to syslog:<br />
<br />
{{hc|/etc/scanbd/test.script|<br />
#!/bin/bash<br />
# look in scanbd.conf for environment variables<br />
<br />
logger -t "scanbd: $0" "Begin of $SCANBD_ACTION for device $SCANBD_DEVICE"<br />
<br />
# printout all env-variables<br />
/usr/bin/printenv > /tmp/scanbd.script.env<br />
<br />
logger -t "scanbd: $0" "End of $SCANBD_ACTION for device $SCANBD_DEVICE"}}<br />
<br />
There are a few other scripts available in {{ic|/etc/scanbd/}} that actually do something - have a look yourself.<br />
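<br />
As a starting point, a minimal scan-to-file action script might call ''scanimage'' on the device reported by ''scanbd''. This is only a sketch; the output path, format and resolution below are assumptions to adjust:<br />
<br />
{{bc|<nowiki><br />
#!/bin/bash<br />
# minimal sketch: scan one page from the device that reported the button press<br />
# (output path, format and resolution are example values)<br />
out="/tmp/scan-$(date +%Y%m%d-%H%M%S).tiff"<br />
logger -t "scanbd: $0" "Begin scan from $SCANBD_DEVICE to $out"<br />
scanimage -d "$SCANBD_DEVICE" --format=tiff --resolution 300 > "$out"<br />
logger -t "scanbd: $0" "End scan from $SCANBD_DEVICE"<br />
</nowiki>}}<br />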
<br />
Also, {{ic|/etc/scanbd/scanbd.conf}} has "include" directives at the end, which refer to preconfigured button definitions of a few scanners.<br />
<br />
 $ grep 'include(' /etc/scanbd/scanbd.conf<br />
# include("scanner.d/myscanner.conf")<br />
# include("/my/long/path/myscanner.conf")<br />
include(scanner.d/avision.conf)<br />
include(scanner.d/fujitsu.conf)<br />
include(scanner.d/hp.conf)<br />
include(scanner.d/pixma.conf)<br />
include(scanner.d/snapscan.conf)</div>Afhttps://wiki.archlinux.org/index.php?title=Libvirt&diff=649280Libvirt2021-01-19T16:48:13Z<p>Af: removed trailing whitespaces to fix line breaks in "virt-install" examples. minor typo.</p>
<hr />
<div>{{DISPLAYTITLE:libvirt}}<br />
[[Category:Virtualization]]<br />
[[ja:libvirt]]<br />
[[zh-hans:Libvirt]]<br />
[[zh-hant:Libvirt]]<br />
{{Related articles start}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|:PCI passthrough via OVMF}}<br />
{{Related articles end}}<br />
<br />
Libvirt is a collection of software that provides a convenient way to manage virtual machines and other virtualization functionality, such as storage and network interface management. These software pieces include a long-term stable C API, a daemon (libvirtd), and a command line utility (virsh). A primary goal of libvirt is to provide a single way to manage multiple different virtualization providers/hypervisors, such as the [[QEMU|KVM/QEMU]], [[Xen]], [[LXC]], [http://openvz.org OpenVZ] or [[VirtualBox]] [[:Category:Hypervisors|hypervisors]] ([http://libvirt.org/drivers.html among others]).<br />
<br />
Some of the major libvirt features are:<br />
*'''VM management''': Various domain lifecycle operations such as start, stop, pause, save, restore, and migrate. Hotplug operations for many device types including disk and network interfaces, memory, and CPUs.<br />
*'''Remote machine support''': All libvirt functionality is accessible on any machine running the libvirt daemon, including remote machines. A variety of network transports are supported for connecting remotely, with the simplest being SSH, which requires no extra explicit configuration.<br />
*'''Storage management''': Any host running the libvirt daemon can be used to manage various types of storage: create file images of various formats (qcow2, vmdk, raw, ...), mount NFS shares, enumerate existing LVM volume groups, create new LVM volume groups and logical volumes, partition raw disk devices, mount iSCSI shares, and much more.<br />
*'''Network interface management''': Any host running the libvirt daemon can be used to manage physical and logical network interfaces. Enumerate existing interfaces, as well as configure (and create) interfaces, bridges, vlans, and bond devices.<br />
*'''Virtual NAT and Route based networking''': Any host running the libvirt daemon can manage and create virtual networks. Libvirt virtual networks use firewall rules to act as a router, providing VMs transparent access to the host machine's network.<br />
<br />
== Installation ==<br />
<br />
Because of its daemon/client architecture, libvirt only needs to be installed on the machine which will host the virtualized system. Note that the server and client can be the same physical machine.<br />
<br />
=== Server ===<br />
<br />
[[Install]] the {{pkg|libvirt}} package, as well as at least one hypervisor:<br />
<br />
* The [http://libvirt.org/drvqemu.html libvirt KVM/QEMU driver] is the primary ''libvirt'' driver and if [[QEMU#Enabling_KVM|KVM is enabled]], fully virtualized, hardware accelerated guests will be available. See the [[QEMU]] article for more information.<br />
<br />
* Other [http://libvirt.org/drivers.html supported hypervisors] include [[LXC]], [[VirtualBox]] and [[Xen]]. See the respective articles for installation instructions. With respect to {{ic|libvirtd}} installation note:<br />
** The [http://libvirt.org/drvlxc.html libvirt LXC driver] has no dependency on the [[LXC]] userspace tools provided by {{Pkg|lxc}}, therefore there is no need to install the package if planning on using the driver.<br />
** [[Xen]] support is available, but not by default. You need to use the [[ABS]] to modify {{Pkg|libvirt}}'s [[PKGBUILD]] and build it without the {{ic|--without-xen}} option. As libvirt's VirtualBox support is in turn not yet stable, you might as well replace that option with {{ic|--without-vbox}}.<br />
<br />
For network connectivity, install:<br />
<br />
* {{Pkg|ebtables}}, and {{Pkg|dnsmasq}} for the [http://wiki.libvirt.org/page/VirtualNetworking#The_default_configuration default] NAT/DHCP networking.<br />
* {{Pkg|bridge-utils}} for bridged networking.<br />
* {{Pkg|openbsd-netcat}} for remote management over [[SSH]].<br />
<br />
{{Note|If you are using [[firewalld]], as of {{ic|libvirt}} 5.1.0 and [[firewalld]] 0.7.0 you no longer need to change the firewall backend to [[iptables]]. {{ic|libvirt}} now installs a zone called 'libvirt' in [[firewalld]] and manages its required network rules there. [https://libvirt.org/firewall.html Firewall and network filtering in libvirt]}}<br />
<br />
=== Client ===<br />
<br />
The client is the user interface that will be used to manage and access the virtual machines.<br />
<br />
* {{App|virsh|Command line program for managing and configuring domains.|https://libvirt.org/|{{Pkg|libvirt}}}}<br />
* {{App|[[Wikipedia:GNOME Boxes|GNOME Boxes]]|Simple GNOME 3 application to access remote or virtual systems.|https://wiki.gnome.org/Apps/Boxes|{{Pkg|gnome-boxes}}}}<br />
* {{App|Libvirt Sandbox|Application sandbox toolkit.|https://sandbox.libvirt.org/|{{AUR|libvirt-sandbox}}}}<br />
* {{App|Remote Viewer|Simple remote display client.|https://virt-manager.org/|{{Pkg|virt-viewer}}}}<br />
* {{App|Qt VirtManager|Qt application for managing virtual machines.|https://github.com/F1ash/qt-virt-manager|{{AUR|qt-virt-manager}}}}<br />
* {{App|[[Wikipedia:Virtual Machine Manager|Virtual Machine Manager]]|Graphically manage KVM, Xen, or LXC via libvirt.|https://virt-manager.org/|{{Pkg|virt-manager}}}}<br />
<br />
A list of libvirt-compatible software can be found [http://libvirt.org/apps.html here].<br />
<br />
== Configuration ==<br />
<br />
For '''''system'''''-level administration (i.e. global settings and image-''volume'' location), libvirt minimally requires [[#Set up authentication|setting up authorization]], and [[#Daemon|starting the daemon]].<br />
<br />
{{Note|For user-'''''session''''' administration, daemon setup and configuration is ''not'' required; authorization, however, is limited to local abilities; the front-end will launch a local instance of the '''libvirtd''' daemon.}}<br />
<br />
=== Set up authentication ===<br />
<br />
From [http://libvirt.org/auth.html#ACL_server_config libvirt: Connection authentication]:<br />
:The libvirt daemon allows the administrator to choose the authentication mechanisms used for client connections on each network socket independently. This is primarily controlled via the libvirt daemon master config file in {{ic|/etc/libvirt/libvirtd.conf}}. Each of the libvirt sockets can have its authentication mechanism configured independently. There is currently a choice of {{ic|none}}, {{ic|polkit}} and {{ic|sasl}}.<br />
<br />
Because {{Pkg|libvirt}} pulls {{Pkg|polkit}} as a dependency during installation, [[#Using polkit|polkit]] is used as the default value for the {{ic|unix_sock_auth}} parameter ([http://libvirt.org/auth.html#ACL_server_polkit source]). [[#Authenticate with file-based permissions|File-based permissions]] remain nevertheless available.<br />
<br />
==== Using polkit ====<br />
{{Note|A system reboot may be required before authenticating with {{ic|polkit}} works correctly.}}<br />
<br />
The ''libvirt'' daemon provides two [[Polkit#Actions|polkit actions]] in {{ic|/usr/share/polkit-1/actions/org.libvirt.unix.policy}}:<br />
<br />
* {{ic|org.libvirt.unix.manage}} for full management access (RW daemon socket), and<br />
* {{ic|org.libvirt.unix.monitor}} for monitoring only access (read-only socket).<br />
<br />
The default policy for the RW daemon socket requires authenticating as an admin. This is akin to [[sudo]] auth, but does not require that the client application ultimately run as root. The default policy will still allow any application to connect to the RO socket.<br />
<br />
By default, Arch considers anybody in the {{ic|wheel}} group an administrator: this is defined in {{ic|/usr/share/polkit-1/rules.d/50-default.rules}} (see [[Polkit#Administrator identities]]). Therefore there is no need to create a new group and rule file '''if your user is a member of the {{ic|wheel}} group''': upon connection to the RW socket (e.g. via {{Pkg|virt-manager}}) you will be prompted for your user's password.<br />
<br />
{{Note|Prompting for a password relies on the presence of an [[Polkit#Authentication_agents|authentication agent]] on the system. Console users may face an issue with the default {{ic|pkttyagent}} agent which may or may not work properly.}}<br />
<br />
{{Tip|If you want to configure passwordless authentication, see [[Polkit#Bypass password prompt]].}}<br />
<br />
As of libvirt 1.2.16 (commit:[http://libvirt.org/git/?p=libvirt.git;a=commit;h=e94979e901517af9fdde358d7b7c92cc055dd50c]), members of the {{ic|libvirt}} group have passwordless access to the RW daemon socket by default. The easiest way to ensure your user has access is to ensure the libvirt group exists and they are a member of it.<br />
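<br />
For example, to add your user to the {{ic|libvirt}} group (replace ''username''):<br />
<br />
 # gpasswd -a ''username'' libvirt<br />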
<br />
You may change the group authorized to access the RW daemon socket. As an example, to authorize the {{ic|kvm}} group, create the following file:<br />
<br />
{{hc|/etc/polkit-1/rules.d/50-libvirt.rules|<nowiki><br />
/* Allow users in kvm group to manage the libvirt<br />
daemon without authentication */<br />
polkit.addRule(function(action, subject) {<br />
if (action.id == "org.libvirt.unix.manage" &&<br />
subject.isInGroup("kvm")) {<br />
return polkit.Result.YES;<br />
}<br />
});</nowiki><br />
}}<br />
<br />
Then [[Users_and_groups#Other_examples_of_user_management|add yourself]] to the {{ic|kvm}} group. Replace ''kvm'' with any group of your preference; just make sure it exists and that your user is a member of it (see [[Users and groups]] for more information).<br />
<br />
Do not forget to relogin for group changes to take effect.<br />
<br />
==== Authenticate with file-based permissions ====<br />
<br />
To define file-based permissions for users in the ''libvirt'' group to manage virtual machines, uncomment and define:<br />
<br />
{{hc|/etc/libvirt/libvirtd.conf|<nowiki><br />
#unix_sock_group = "libvirt"<br />
#unix_sock_ro_perms = "0777" # set to 0770 to deny non-group libvirt users<br />
#unix_sock_rw_perms = "0770"<br />
#auth_unix_ro = "none"<br />
#auth_unix_rw = "none"<br />
</nowiki>}}<br />
<br />
While some guides mention changing the permissions of certain libvirt directories to ease management, keep in mind that these permissions are lost on package updates. To edit these system directories, the root user is expected.<br />
<br />
=== Daemon ===<br />
<br />
[[Start]] both {{ic|libvirtd.service}} and {{ic|virtlogd.service}}. Optionally [[enable]] {{ic|libvirtd.service}} (which will also enable {{ic|virtlogd.socket}} and {{ic|virtlockd.socket}} [[Systemd#Using_units|units]], so there is NO need to also enable {{ic|virtlogd.service}}).<br />
<br />
=== Unencrypted TCP/IP sockets ===<br />
<br />
{{Warning|This method is intended to improve remote domain connection speed on trusted networks. It is the least secure connection method and should ''only'' be used for testing, or over a secure, private, and trusted network. SASL is not enabled here, so all TCP traffic is ''cleartext''. For real world use, ''always'' enable SASL.}}<br />
<br />
Edit {{ic|/etc/libvirt/libvirtd.conf}}:<br />
{{hc|/etc/libvirt/libvirtd.conf|<nowiki><br />
listen_tls = 0<br />
listen_tcp = 1<br />
auth_tcp="none"<br />
</nowiki>}}<br />
<br />
It is also necessary to start the server in listening mode by editing {{ic|/etc/conf.d/libvirtd}}:<br />
<br />
{{hc|/etc/conf.d/libvirtd|2=LIBVIRTD_ARGS="--listen"}}<br />
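<br />
You can then test the connection from a client over the unencrypted TCP transport (replace ''hostname''):<br />
<br />
 $ virsh --connect qemu+tcp://''hostname''/system<br />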
<br />
=== Access virtual machines using their hostnames ===<br />
<br />
For host access to guests on non-isolated, bridged networks, enable the {{ic|libvirt}} NSS module provided by {{Pkg|libvirt}}.<br />
<br />
Edit {{ic|/etc/nsswitch.conf}}:<br />
{{hc|/etc/nsswitch.conf|<nowiki><br />
hosts: files libvirt libvirt_guest dns myhostname<br />
</nowiki>}}<br />
<br />
{{Note|While commands such as {{ic|ping}} and {{ic|ssh}} should work with virtual machine hostnames, commands such as {{ic|host}} and {{ic|nslookup}} may fail or produce unexpected results because they rely on DNS. Use {{ic|getent hosts <vm-hostname>}} instead.}}<br />
<br />
== Test ==<br />
<br />
To test if libvirt is working properly on a ''system'' level:<br />
<br />
$ virsh -c qemu:///system<br />
<br />
To test if libvirt is working properly for a user-''session'':<br />
<br />
$ virsh -c qemu:///session<br />
<br />
== Management ==<br />
<br />
Libvirt management is done mostly with three tools: {{Pkg|virt-manager}} (GUI), {{ic|virsh}}, and {{ic|guestfish}} (which is part of {{Pkg|libguestfs}}).<br />
<br />
=== virsh ===<br />
<br />
The virsh program is for managing guest ''domains'' (virtual machines) and works well for scripting and virtualization administration. Though most virsh commands require root privileges to run, due to the communication channels used to talk to the hypervisor, typical management, creation, and running of domains (like that done with VirtualBox) can be done as a regular user.<br />
<br />
Virsh includes an interactive terminal that can be entered if no commands are passed (options are allowed though): {{ic|virsh}}. The interactive terminal has support for tab completion.<br />
<br />
From the command line:<br />
<br />
$ virsh [option] <command> [argument]...<br />
<br />
From the interactive terminal:<br />
<br />
virsh # <command> [argument]...<br />
<br />
Help is available:<br />
<br />
$ virsh help [option*] or [group-keyword*]<br />
<br />
=== Storage pools ===<br />
<br />
A pool is a location where storage ''volumes'' can be kept. What libvirt defines as ''volumes'', others may define as "virtual disks" or "virtual machine images". Pool locations may be a directory, a network filesystem, or a partition (this includes [[LVM]]). Pools can be toggled active or inactive, and space can be allocated to them.<br />
<br />
On the ''system''-level, {{ic|/var/lib/libvirt/images/}} will be activated by default; on a user-''session'', {{ic|virt-manager}} creates {{ic|$HOME/VirtualMachines}}.<br />
<br />
Print active and inactive storage pools:<br />
<br />
$ virsh pool-list --all<br />
<br />
==== Create a new pool using virsh ====<br />
<br />
To ''add'' a storage pool, here is the general command form, followed by examples adding a directory and an LVM volume:<br />
<br />
$ virsh pool-define-as name type [source-host] [source-path] [source-dev] [source-name] [<target>] [--source-format format]<br />
$ virsh pool-define-as ''poolname'' dir - - - - /home/''username''/.local/libvirt/images<br />
$ virsh pool-define-as ''poolname'' fs - - ''/dev/vg0/images'' - ''mntpoint''<br />
<br />
The above command defines the information for the pool, to build it:<br />
<br />
$ virsh pool-build ''poolname''<br />
$ virsh pool-start ''poolname''<br />
$ virsh pool-autostart ''poolname''<br />
<br />
To remove it:<br />
<br />
$ virsh pool-undefine ''poolname''<br />
<br />
{{Tip|For LVM storage pools:<br />
* It is a good practice to dedicate a volume group to the storage pool only.<br />
* Choose a LVM volume group that differs from the pool name, otherwise when the storage pool is deleted the LVM group will be too.<br />
}}<br />
<br />
==== Create a new pool using virt-manager ====<br />
<br />
First, connect to a hypervisor (e.g. QEMU/KVM ''system'', or user-''session''). Then, right-click on a connection and select ''Details''; select the ''Storage'' tab, push the ''+'' button on the lower-left, and follow the wizard.<br />
<br />
=== Storage volumes ===<br />
<br />
Once the pool has been created, volumes can be created inside the pool. ''If building a new domain (virtual machine), this step can be skipped as a volume can be created in the domain creation process.''<br />
<br />
==== Create a new volume with virsh ====<br />
<br />
Create volume, list volumes, resize, and delete:<br />
 $ virsh vol-create-as ''poolname'' ''volumename'' 10GiB --format raw|bochs|qcow|qcow2|vmdk<br />
$ virsh vol-upload --pool ''poolname'' ''volumename'' ''volumepath''<br />
$ virsh vol-list ''poolname''<br />
$ virsh vol-resize --pool ''poolname'' ''volumename'' 12GiB<br />
$ virsh vol-delete --pool ''poolname'' ''volumename''<br />
$ virsh vol-dumpxml --pool ''poolname'' ''volumename'' # for details.<br />
<br />
==== virt-manager backing store type bug ====<br />
<br />
{{out of date|This bug was fixed upstream in 2016. Numerous libvirt releases have occurred since then.}}<br />
<br />
On newer versions of {{ic|virt-manager}} you can specify a backing store to use when creating a new disk. This is very useful, in that new domains can be based on base images, saving you both time and disk space when provisioning new virtual systems. There is a bug (https://bugzilla.redhat.com/show_bug.cgi?id=1235406) in affected versions of {{ic|virt-manager}} which causes it to choose the wrong type for the backing image when the backing image is of {{ic|qcow2}} type. In this case, it will errantly pick the backing type as {{ic|raw}}. This will cause the new image to be unable to read from the backing store, effectively removing the utility of having a backing store at all.<br />
<br />
There is a workaround for this issue: {{ic|qemu-img}} has long been able to do this operation directly. If you wish to have a backing store for your new domain before this bug is fixed, you may use the following command:<br />
<br />
$ qemu-img create -f qcow2 -o backing_file=<path to backing image>,backing_fmt=qcow2 <disk name> <disk size><br />
<br />
Then you can use this image as the base for your new domain and it will use the backing store as a COW volume saving you time and disk space.<br />
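<br />
For example, with hypothetical paths and a 20 GiB virtual size:<br />
<br />
 $ qemu-img create -f qcow2 -o backing_file=/vms/base.qcow2,backing_fmt=qcow2 /vms/new-domain.qcow2 20G<br />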
<br />
=== Domains ===<br />
<br />
Virtual machines are called ''domains''. If working from the command line, use {{ic|virsh}} to list, create, pause, shutdown domains, etc. {{ic|virt-viewer}} can be used to view domains started with {{ic|virsh}}. Creation of domains is typically done either graphically with {{ic|virt-manager}} or with {{ic|virt-install}} (a command line program installed as part of the {{pkg|virt-install}} package).<br />
<br />
Creating a new domain typically involves using some installation media, such as an {{ic|.iso}} from the storage pool or an optical drive.<br />
<br />
Print active and inactive domains:<br />
<br />
# virsh list --all<br />
<br />
{{note|[[SELinux]] has a built-in exemption for libvirt that allows volumes in {{ic|/var/lib/libvirt/images/}} to be accessed. If using SELinux and there are issues with the volumes, ensure that volumes are in that directory, or ensure that other storage pools are correctly labeled.}}<br />
<br />
==== Create a new domain using virt-install ====<br />
<br />
{{Accuracy|{{ic|/usr/share/libosinfo}} isn't provided by any official packages, including {{Pkg|libosinfo}}.|section=Where_is_'/usr/share/libosinfo/db/oses/os.xml'?}}<br />
<br />
For an extremely detailed domain (virtual machine) setup, it is easier to [[#Create a new domain using virt-manager]]. However, basics can easily be done with {{ic|virt-install}} and still run quite well. Minimum specifications are {{ic|--name}}, {{ic|--memory}}, guest storage ({{ic|--disk}}, {{ic|--filesystem}}, or {{ic|--nodisks}}), and an install method (generally an {{ic|.iso}} or CD). See {{man|1|virt-install}} for more details and information about unlisted options.<br />
<br />
Arch Linux install (two GiB, qcow2 format volume create; user-networking):<br />
<br />
$ virt-install \<br />
--name arch-linux_testing \<br />
--memory 1024 \<br />
--vcpus=2,maxvcpus=4 \<br />
--cpu host \<br />
--cdrom $HOME/Downloads/arch-linux_install.iso \<br />
--disk size=2,format=qcow2 \<br />
--network user \<br />
--virt-type kvm<br />
<br />
Fedora testing (Xen hypervisor, non-default pool, do not originally view):<br />
<br />
$ virt-install \<br />
--connect xen:/// \<br />
--name fedora-testing \<br />
--memory 2048 \<br />
--vcpus=2 \<br />
--cpu=host \<br />
--cdrom /tmp/fedora20_x84-64.iso \<br />
--os-type=linux --os-variant=fedora20 \<br />
--disk pool=testing,size=4 \<br />
--network bridge=br0 \<br />
--graphics=vnc \<br />
--noautoconsole<br />
$ virt-viewer --connect xen:/// fedora-testing<br />
<br />
Windows:<br />
<br />
$ virt-install \<br />
--name=windows7 \<br />
--memory 2048 \<br />
--cdrom /dev/sr0 \<br />
--os-variant=win7 \<br />
--disk /mnt/storage/domains/windows7.qcow2,size=20GiB \<br />
--network network=vm-net \<br />
--graphics spice<br />
<br />
{{Tip|Run {{ic|1=osinfo-query --fields=name,short-id,version os}} to get argument for {{ic|--os-variant}}; this will help define some specifications for the domain. However, {{ic|--memory}} and {{ic|--disk}} will need to be entered; one can look within the appropriate {{ic|/usr/share/libosinfo/db/oses/''os''.xml}} if needing these specifications. After installing, it will likely be preferable to install the [http://www.spice-space.org/download.html Spice Guest Tools] that include the [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/form-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Para_virtualized_drivers-Mounting_the_image_with_virt_manager.html VirtIO drivers]. For a Windows VirtIO network driver there is also {{Aur|virtio-win}}. These drivers are referenced by a {{ic|1=<model type='virtio' />}} in the guest's {{ic|.xml}} configuration section for the device. A bit more information can also be found on the [[QEMU#Preparing_a_Windows_guest|QEMU article]].}}<br />
<br />
Import existing volume:<br />
<br />
$ virt-install \<br />
--name demo \<br />
--memory 512 \<br />
--disk /home/user/VMs/mydisk.img \<br />
--import<br />
<br />
==== Create a new domain using virt-manager ====<br />
<br />
First, connect to the hypervisor (e.g. QEMU/KVM ''system'' or user ''session''), right click on a connection and select ''New'', and follow the wizard.<br />
<br />
* On the ''fourth step'', de-selecting ''Allocate entire disk now'' will make setup quicker and can save disk space in the interim; ''however'', it may cause volume fragmentation over time.<br />
* On the ''fifth step'', open ''Advanced options'' and make sure that ''Virt Type'' is set to ''kvm'' (this is usually the preferred method). If additional hardware setup is required, select the ''Customize configuration before install'' option.<br />
<br />
==== Manage a domain ====<br />
<br />
Start a domain:<br />
<br />
$ virsh start ''domain''<br />
$ virt-viewer --connect qemu:///session ''domain''<br />
<br />
Gracefully attempt to shutdown a domain; force off a domain:<br />
<br />
$ virsh shutdown ''domain''<br />
$ virsh destroy ''domain''<br />
<br />
Autostart domain on libvirtd start:<br />
<br />
$ virsh autostart ''domain''<br />
$ virsh autostart ''domain'' --disable<br />
<br />
Shutdown domain on host shutdown:<br />
<br />
: Running domains can be automatically suspended/shutdown at host shutdown using the {{ic|libvirt-guests.service}} systemd service. This same service will resume/startup the suspended/shutdown domain automatically at host startup. Read {{ic|/etc/conf.d/libvirt-guests}} for service options.<br />
<br />
Edit a domain's XML configuration:<br />
<br />
$ virsh edit ''domain''<br />
<br />
{{note|Virtual Machines started directly by QEMU are not manageable by libvirt tools.}}<br />
<br />
=== Networks ===<br />
<br />
A [https://jamielinux.com/docs/libvirt-networking-handbook/ decent overview of libvirt networking].<br />
<br />
Four network types exist that can be created to connect a domain to:<br />
<br />
* bridge — a virtual device; shares data directly with a physical interface. Use this if the host has ''static'' networking, it does not need to connect other domains, the domain requires full inbound and outbound trafficking, and the domain is running on a ''system''-level. See [[Network bridge]] on how to add a bridge. After creation, it needs to be specified in the respective guest's {{ic|.xml}} configuration file.<br />
* network — a virtual network; has ability to share with other domains. Libvirt offers many virtual network modes, such as NAT mode (Network address translation), routed mode and isolated mode. Using a virtual network is particularly indicated if the host has ''dynamic'' networking (e.g. NetworkManager), or using wireless.<br />
* macvtap — connect directly to a host physical interface.<br />
* user — local ability networking. Use this only for a user ''session''.<br />
<br />
{{ic|virsh}} has the ability to create networking with numerous options for most users, however, it is easier to create network connectivity with a graphic user interface (like {{ic|virt-manager}}), or to do so on [[#Create a new domain using virt-install|creation with virt-install]].<br />
<br />
{{note|libvirt handles DHCP and DNS with {{pkg|dnsmasq}}, launching a separate instance for every virtual network. It also adds iptables rules for proper routing, and enables the {{ic|ip_forward}} kernel parameter. This also means that having dnsmasq running on the host system is not necessary to support libvirt requirements (and could interfere with libvirt dnsmasq instances).}}<br />
<br />
You can get a VM's IP address (provided it is connected to the {{ic|''default''}} network and receives its IP address via DHCP) with the<br />
<br />
 $ virsh net-dhcp-leases default<br />
<br />
command (replacing {{ic|''default''}} with the name of the network the VM is connected to), or, if the VM has {{ic|''qemu-guest-agent''}} running, via<br />
<br />
 $ virsh domifaddr --source agent $vm<br />
<br />
replacing {{ic|''$vm''}} with the actual virtual machine name (or domain ID).<br />
<br />
==== IPv6 ====<br />
<br />
When adding an IPv6 address through any of the configuration tools, you will likely receive the following error:<br />
Check the host setup: enabling IPv6 forwarding with RA routes without accept_ra set to 2 is likely to cause routes loss. Interfaces to look at: ''eth0''<br />
<br />
Fix this by running the following command (replace {{ic|''eth0''}} with the name of your physical interface):<br />
<br />
# sysctl net.ipv6.conf.eth0.accept_ra=2<br />
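<br />
To make the setting persistent across reboots, it can be placed in a [[sysctl]] drop-in file (the file name below is an arbitrary example; replace ''eth0''):<br />
<br />
{{hc|/etc/sysctl.d/60-ipv6-accept-ra.conf|2=<br />
net.ipv6.conf.''eth0''.accept_ra = 2<br />
}}<br />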
<br />
=== Snapshots ===<br />
<br />
Snapshots take the disk, memory, and device state of a domain at a point-of-time, and save it for future use. They have many uses, from saving a "clean" copy of an OS image to saving a domain's state before a potentially destructive operation. Snapshots are identified with a unique name.<br />
<br />
Snapshots are saved within the volume itself, and the volume must be in qcow2 or raw format. Snapshots use deltas in order not to take as much space as a full copy would.<br />
<br />
==== Create a snapshot ====<br />
<br />
{{Out of date|Some of this data appears to be dated.}}<br />
<br />
Once a snapshot is taken, it is saved as a new block device and the original image is taken offline. Snapshots can be chosen from and also merged into one another (even without shutting down the domain).<br />
<br />
Print a running domain's volumes (running domains can be printed with {{ic|virsh list}}):<br />
<br />
{{hc|# virsh domblklist ''domain''|<nowiki><br />
Target Source<br />
------------------------------------------------<br />
vda /vms/domain.img<br />
</nowiki>}}<br />
<br />
To see a volume's physical properties:<br />
<br />
{{hc|# qemu-img info /vms/domain.img|<nowiki><br />
image: /vms/domain.img<br />
file format: qcow2<br />
virtual size: 50G (53687091200 bytes)<br />
disk size: 2.1G<br />
cluster_size: 65536<br />
</nowiki>}}<br />
<br />
Create a disk-only snapshot (the option {{ic|--atomic}} will prevent the volume from being modified if snapshot creation fails):<br />
<br />
# virsh snapshot-create-as ''domain'' snapshot1 --disk-only --atomic<br />
<br />
List snapshots:<br />
<br />
{{hc|# virsh snapshot-list ''domain''|<nowiki><br />
Name Creation Time State<br />
------------------------------------------------------------<br />
snapshot1 2012-10-21 17:12:57 -0700 disk-snapshot<br />
</nowiki>}}<br />
<br />
One can then copy the original image with {{ic|1=cp --sparse=true}} or {{ic|rsync -S}} and then merge the original back into the snapshot:<br />
<br />
# virsh blockpull --domain ''domain'' --path /vms/''domain''.snapshot1<br />
<br />
{{ic|domain.snapshot1}} becomes a new volume. After this is done, the original volume ({{ic|domain.img}}) and the snapshot metadata can be deleted. {{ic|virsh blockcommit}} would work in the opposite direction to {{ic|blockpull}}, but it seems to be currently under development (including the {{ic|snapshot-revert}} feature), scheduled to be released sometime next year.<br />
<br />
=== Other management ===<br />
<br />
Connect to non-default hypervisor:<br />
<br />
$ virsh --connect xen:///<br />
virsh # uri<br />
xen:///<br />
<br />
Connect to the QEMU hypervisor over SSH; and the same with logging:<br />
<br />
$ virsh --connect qemu+ssh://''username''@''host''/system<br />
$ LIBVIRT_DEBUG=1 virsh --connect qemu+ssh://''username''@''host''/system<br />
<br />
Connect a graphic console over SSH:<br />
<br />
$ virt-viewer --connect qemu+ssh://''username''@''host''/system ''domain''<br />
$ virt-manager --connect qemu+ssh://''username''@''host''/system ''domain''<br />
<br />
{{Note|If you are having problems connecting to a remote RHEL server (or anything other than Arch, really), try the two workarounds mentioned in {{bug|30748}} and {{bug|22068}}.}}<br />
<br />
Connect to the VirtualBox hypervisor (''VirtualBox support in libvirt is not stable yet and may cause libvirtd to crash''):<br />
<br />
$ virsh --connect vbox:///system<br />
<br />
Network configurations:<br />
<br />
$ virsh -c qemu:///system net-list --all<br />
$ virsh -c qemu:///system net-dumpxml default<br />
<br />
== Sharing data between host and guest ==<br />
<br />
=== Virtio-FS ===<br />
<br />
The description here uses hugepages to enable the usage of shared folders. [https://libvirt.org/kbase/virtiofs.html Sharing files with Virtio-FS] gives an overview of the supported options for enabling file sharing with the guest.<br />
<br />
First you need to [[KVM#Enabling huge pages|enable hugepages]] which are used by the virtual machine:<br />
<br />
{{hc|/etc/sysctl.d/40-hugepage.conf|2=<br />
vm.nr_hugepages = ''nr_hugepages''<br />
}}<br />
<br />
To determine the number of hugepages needed, check the size of the hugepages:<br />
<br />
$ grep Hugepagesize /proc/meminfo<br />
<br />
The number of hugepages is ''memory size of virtual machine / Hugepagesize''. Add some additional pages on top of this value. You have to reboot after this step so that the hugepages are allocated.<br />
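<br />
For example, for a virtual machine with 4 GiB (4194304 KiB) of memory and a hugepage size of 2048 KiB, 4194304 / 2048 = 2048 hugepages are needed, so a value like {{ic|1=vm.nr_hugepages = 2100}} leaves some headroom.<br />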
<br />
Now you have to prepare the configuration of the virtual machine:<br />
<br />
{{hc|# virsh edit <name of virtual machine>|<nowiki><br />
<domain><br />
...<br />
<memoryBacking><br />
<hugepages/><br />
</memoryBacking><br />
...<br />
<cpu ...><br />
<numa><br />
<cell memory='memory size of virtual machine' unit='KiB' memAccess='shared'/><br />
</numa><br />
</cpu><br />
...<br />
<filesystem type='mount' accessmode='passthrough'><br />
<driver type='virtiofs'/><br />
<source dir='path to source folder on host'/><br />
<target dir='mount_tag'/><br />
</filesystem><br />
...<br />
</devices><br />
</domain><br />
</nowiki>}}<br />
It is necessary to add the NUMA definition so that the memory access can be declared as shared. The {{ic|id}} and {{ic|cpus}} values for NUMA will be inserted by virsh.<br />
<br />
It should now be possible to mount the folder inside the virtual machine:<br />
<br />
# mount -t virtiofs ''mount_tag'' /mnt/mount/path<br />
<br />
Add the following [[fstab]] entry to mount the folder automatically at boot:<br />
<br />
{{hc|/etc/fstab|2=<br />
...<br />
''mount_tag'' /mnt/mount/path virtiofs rw,noatime,_netdev 0 0<br />
}}<br />
<br />
=== 9p ===<br />
<br />
File system directories can be shared using the [[Wikipedia:9P (protocol)|9P protocol]]. Details are available in [https://wiki.qemu.org/Documentation/9psetup QEMU's documentation of 9psetup].<br />
<br />
Configure the virtual machine as follows:<br />
<br />
{{bc|1=<br />
<domain><br />
...<br />
<devices><br />
...<br />
<filesystem type="mount" accessmode="mapped"><br />
<source dir="''/path/on/host''"/><br />
<target dir="''mount_tag''"/><br />
</filesystem><br />
</devices><br />
</domain><br />
}}<br />
<br />
Boot the guest and [[mount]] the shared directory from it using:<br />
<br />
# mount -t 9p -o trans=virtio,version=9p2000.L ''mount_tag'' ''/path/to/mount_point/on/guest''<br />
<br />
See https://www.kernel.org/doc/html/latest/filesystems/9p.html for more mount options.<br />
<br />
To mount it at boot, add it to the guest's [[fstab]]:<br />
<br />
{{hc|/etc/fstab|2=<br />
...<br />
''mount_tag'' ''/path/to/mount_point/on/guest'' 9p trans=virtio,version=9p2000.L 0 0<br />
}}<br />
<br />
The module for the 9p transport (i.e. {{ic|9pnet_virtio}} for {{ic|1=trans=virtio}}) is not loaded automatically, so mounting the file system from {{ic|/etc/fstab}} will fail and you will encounter an error like {{ic|9pnet: Could not find request transport: virtio}}. The solution is to [[Kernel module#Automatic module loading with systemd|preload the module during boot]]:<br />
<br />
{{hc|/etc/modules-load.d/9pnet_virtio.conf|<br />
9pnet_virtio<br />
}}<br />
<br />
== Python connectivity code ==<br />
<br />
The {{Pkg|libvirt-python}} package provides a Python API in {{ic|/usr/lib/python3.x/site-packages/libvirt.py}}.<br />
<br />
General examples are given in {{ic|/usr/share/doc/libvirt-python-''your_libvirt_version''/examples/}}.<br />
<br />
Unofficial example using {{Pkg|qemu}} and {{Pkg|openssh}}:<br />
<br />
{{bc|<nowiki><br />
#!/usr/bin/env python3<br />
import libvirt<br />
<br />
# connect to a remote system hypervisor over SSH<br />
conn = libvirt.open("qemu+ssh://xxx/system")<br />
print("Trying to find node on xxx")<br />
domains = conn.listDomainsID()<br />
for domainID in domains:<br />
    domConnect = conn.lookupByID(domainID)<br />
    if domConnect.name() == 'xxx-node':<br />
        print("Found shared node on xxx with ID {}".format(domainID))<br />
        domServ = domConnect<br />
        break<br />
</nowiki>}}<br />
<br />
== UEFI Support ==<br />
<br />
Libvirt can support UEFI virtual machines through QEMU and [https://github.com/tianocore/edk2 OVMF].<br />
<br />
Install the {{Pkg|edk2-ovmf}} package.<br />
<br />
[[Restart]] {{ic|libvirtd}}.<br />
<br />
Now you are ready to create a UEFI virtual machine. Create a new virtual machine through {{Pkg|virt-manager}}. When you get to the final page of the 'New VM' wizard, do the following:<br />
<br />
* Click 'Customize before install', then select 'Finish'<br />
* On the 'Overview' screen, Change the 'Firmware' field to select the 'UEFI x86_64' option.<br />
* Click 'Begin Installation'<br />
* The boot screen you will see should use linuxefi commands to boot the installer, and you should be able to run {{ic|efibootmgr}} inside that system to verify that you are running a UEFI OS.<br />
<br />
For more information about this, refer to [https://fedoraproject.org/wiki/Using_UEFI_with_QEMU this fedora wiki page].<br />
<br />
== PulseAudio ==<br />
<br />
The PulseAudio daemon normally runs under your regular user account, and will only accept connections from the same user. This can be a problem if QEMU is being run as root through [[libvirt]]. To run QEMU as a regular user, edit {{ic|/etc/libvirt/qemu.conf}} and set the {{ic|user}} option to your username.<br />
<br />
user = "dave"<br />
<br />
You will also need to tell QEMU to use the PulseAudio backend and identify the server to connect to. First, add the qemu namespace to your domain.<br />
<br />
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'{{Dead link|2020|12|27|status=404}}><br />
<br />
Then add the following section to your domain configuration using {{ic|virsh edit}}.<br />
<br />
<qemu:commandline><br />
<qemu:env name='QEMU_AUDIO_DRV' value='pa'/><br />
<qemu:env name='QEMU_PA_SERVER' value='/run/user/1000/pulse/native'/><br />
</qemu:commandline><br />
<br />
{{ic|1000}} is your user id. Change it if necessary.<br />
<br />
== Hypervisor CPU use ==<br />
The default VM configuration generated by virt-manager may cause rather high (10-20%) CPU use by the QEMU process.<br />
If you plan to run the VM in headless mode, consider removing some of the unnecessary devices.<br />
<br />
== See also ==<br />
<br />
* [http://libvirt.org/drvqemu.html Official libvirt web site]<br />
* [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/index.html Red Hat Virtualization Deployment and Administration Guide]<br />
* [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/index.html Red Hat Virtualization Tuning and Optimization Guide]<br />
* [http://docs.slackware.com/howtos:general_admin:kvm_libvirt Slackware KVM and libvirt]<br />
* [http://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaat/liaatkvm.htm IBM KVM]<br />
* [https://jamielinux.com/docs/libvirt-networking-handbook/ libvirt Networking Handbook]</div>Afhttps://wiki.archlinux.org/index.php?title=User:Af&diff=637726User:Af2020-10-12T06:14:42Z<p>Af: Created page with "Hi!"</p>
<hr />
<div>Hi!</div>Afhttps://wiki.archlinux.org/index.php?title=Bspwm&diff=637725Bspwm2020-10-12T06:14:17Z<p>Af: Use "install" instead of copy for the example installation to prevent having to create the dirs AND to make sure bspwmrc gets installed executable, since this seems to be a common pitfall for beginners.</p>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Tiling WMs]]<br />
[[ja:Bspwm]]<br />
[[ru:Bspwm]]<br />
[[es:Bspwm]]<br />
[[pt:Bspwm]]<br />
{{Related articles start}}<br />
{{Related|Window manager}}<br />
{{Related|Comparison of tiling window managers}}<br />
{{Related articles end}}<br />
<br />
[https://github.com/baskerville/bspwm bspwm] is a tiling window manager that represents windows as the leaves of a full binary tree. bspwm supports multiple monitors and is configured and controlled through messages. [https://specifications.freedesktop.org/wm-spec/wm-spec-latest.html EWMH] is partially supported.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|bspwm}} package or {{AUR|bspwm-git}} for the development version.<br />
<br />
== Starting ==<br />
<br />
Run {{ic|bspwm}} using [[xinit]].<br />
<br />
== Configuration ==<br />
<br />
The example configuration is located in {{ic|/usr/share/doc/bspwm/examples/}}.<br />
<br />
Copy/install {{ic|bspwmrc}} from there into {{ic|~/.config/bspwm/}} and {{ic|sxhkdrc}} into {{ic|~/.config/sxhkd/}}.<br />
<br />
The file {{ic|bspwmrc}} needs to be executable since the default example is simply a shell script that in turn<br />
configures bspwm via the {{ic|bspc}} command.<br />
<br />
$ install -Dm755 /usr/share/doc/bspwm/examples/bspwmrc ~/.config/bspwm/bspwmrc<br />
$ install -Dm644 /usr/share/doc/bspwm/examples/sxhkdrc ~/.config/sxhkd/sxhkdrc<br />
<br />
These two files are where you will be setting wm settings and keybindings, respectively.<br />
<br />
See the {{man|1|bspwm}} and {{man|1|sxhkd}} manuals for detailed documentation.<br />
<br />
=== Note for multi-monitor setups ===<br />
<br />
The example bspwmrc configures ten desktops on one monitor like this:<br />
<br />
bspc monitor -d I II III IV V VI VII VIII IX X<br />
<br />
You will need to change this line and add one for each monitor, similar to this:<br />
<br />
bspc monitor DVI-I-1 -d I II III IV<br />
bspc monitor DVI-I-2 -d V VI VII<br />
bspc monitor DP-1 -d VIII IX X<br />
<br />
You can use {{ic|xrandr -q}} or {{ic|bspc query -M}} to find the monitor names.<br />
<br />
The total number of desktops was maintained at ten in the above example, so that each desktop can still be addressed with {{ic|super + {1-9,0&#125;}} in the sxhkdrc.<br />
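<br />
For reference, the matching binding in the example sxhkdrc looks roughly like this (check your own copy, as the example may differ between versions):<br />
<br />
{{bc|<nowiki><br />
# focus or send to the given desktop<br />
super + {_,shift + }{1-9,0}<br />
    bspc {desktop -f,node -d} '^{1-9,10}'<br />
</nowiki>}}<br />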
<br />
=== Rules ===<br />
<br />
There are two ways to set window rules (as of [https://github.com/baskerville/bspwm/commit/cd97a3290aa8d36346deb706fa307f5f8faa2f34 cd97a32]).<br />
<br />
The first is by using the built in rule command, as shown in the example bspwmrc:<br />
<br />
{{bc|<nowiki><br />
bspc rule -a Gimp desktop=^8 follow=on state=floating<br />
bspc rule -a Chromium desktop=^2<br />
bspc rule -a mplayer2 state=floating<br />
bspc rule -a Kupfer.py focus=on<br />
bspc rule -a Screenkey manage=off<br />
</nowiki>}}<br />
<br />
The second option is to use an external rule command. This is more complex, but can allow you to craft more complex window rules. See [https://github.com/baskerville/bspwm/tree/master/examples/external_rules these examples] for a sample rule command.<br />
<br />
If a particular window does not seem to be behaving according to your rules, check the class name of the program. This can be done by running {{ic|xprop {{!}} grep WM_CLASS}} (requires the {{Pkg|xorg-xprop}} package) to make sure you are using the proper string.<br />
<br />
=== Panels ===<br />
<br />
==== Using lemonbar ====<br />
<br />
An example panel for {{AUR|lemonbar-git}} is provided in the examples folder on the GitHub page. You might also get some insights from the [[lemonbar]] wiki page. The panel will be executed by placing {{ic|panel &}} in your bspwmrc. Check the [[optdepends]] in the {{Pkg|bspwm}} package for dependencies that may be required.<br />
<br />
To display system information on your status bar you can use various system calls. This example will show you how to edit your {{ic|panel}} to get the volume status on your BAR:<br />
<br />
{{bc|<nowiki><br />
panel_volume()<br />
{<br />
volStatus=$(amixer get Master | tail -n 1 | cut -d '[' -f 4 | sed 's/].*//g')<br />
volLevel=$(amixer get Master | tail -n 1 | cut -d '[' -f 2 | sed 's/%.*//g')<br />
# is alsa muted or not muted?<br />
if [ "$volStatus" == "on" ]<br />
then<br />
echo "%{Fyellowgreen} $volLevel %{F-}"<br />
else<br />
# If it is muted, make the font red<br />
echo "%{Findianred} $volLevel %{F-}"<br />
fi<br />
}</nowiki>}}<br />
<br />
Next, we will have to make sure it is called and redirected to {{ic|$PANEL_FIFO}}:<br />
<br />
{{bc|<nowiki><br />
while true; do<br />
echo "S" "$(panel_volume) $(panel_clock)" > "$PANEL_FIFO"<br />
sleep 1s<br />
done &<br />
</nowiki>}}<br />
<br />
==== Using yabar ====<br />
<br />
Using the example lemonbar panel requires you to set up your environment ({{ic|.profile}}) and make sure the panel scripts are in your path. An easier panel to set up is {{AUR|yabar}}, which has just one config file.<br />
<br />
==== Using polybar ====<br />
<br />
[[Polybar]] can be used by adding {{ic|polybar ''example'' &}} to your bspwmrc configuration file, where {{ic|''example''}} is the name of the bar.<br />
<br />
=== Scratchpad ===<br />
<br />
==== Using pid ====<br />
You can emulate a dropdown terminal (like i3's scratchpad feature if you put a terminal in it) using bspwm's window flags. Append the following to the end of the bspwm config file (adapt to your own terminal emulator):<br />
<br />
{{bc|<nowiki><br />
bspc rule -a scratchpad sticky=on state=floating hidden=on<br />
# check scratchpad already running<br />
[ "$(ps -x | grep -c 'scratchpad')" -eq "1" ] && st -c scratchpad -e ~/bin/scratch &<br />
</nowiki>}}<br />
<br />
The {{ic|sticky}} flag ensures that the window is always present on the current desktop.<br />
And {{ic|~/bin/scratch}} is:<br />
<br />
{{bc|<nowiki><br />
#!/usr/bin/sh<br />
# only add floating scratchpad window node id to /tmp/scratchid<br />
bspc query -N -n .floating | xargs -i sh -c 'bspc query --node {} -T | grep -q scratchpad && echo {} > /tmp/scratchid'<br />
exec $SHELL<br />
</nowiki>}}<br />
<br />
The hotkey for toggling the scratchpad should be bound to:<br />
<br />
{{bc|<nowiki><br />
id=$(cat /tmp/scratchid);\<br />
bspc node $id --flag hidden;bspc node -f $id<br />
</nowiki>}}<br />
<br />
==== Using class name ====<br />
<br />
In this example we are going to use ''termite'' with a custom class name as our dropdown terminal. It does not have to be ''termite''.<br />
<br />
First create a file in your path with the following content and make it executable. In this example let's call it {{ic|scratchpad.sh}}:<br />
<br />
{{bc|<nowiki><br />
#!/usr/bin/bash<br />
<br />
if [ -z "$1" ]; then<br />
    echo "Usage: $0 <name of hidden scratchpad window>"<br />
    exit 1<br />
fi<br />
<br />
# xdotool returns X window IDs, which bspc also accepts as node selectors<br />
wids=$(xdotool search --class "$1")<br />
for wid in $wids; do<br />
    echo "Toggle $wid"<br />
    bspc node "$wid" --flag hidden -f<br />
done<br />
</nowiki>}}<br />
<br />
Then add this to your bspwm config.<br />
<br />
{{bc|<nowiki><br />
...<br />
bspc rule -a dropdown sticky=on state=floating hidden=on<br />
termite --class dropdown -e "zsh -i" &<br />
...<br />
</nowiki>}}<br />
<br />
To toggle the window a custom rule in [[sxhkd]] is necessary. Give as parameter the custom class name.<br />
<br />
{{bc|<nowiki><br />
super + u<br />
scratchpad.sh dropdown<br />
</nowiki>}}<br />
<br />
==== Other ==== <br />
For a scratch-pad which can use any window type without pre-defined rules, see: [https://www.reddit.com/r/bspwm/comments/3xnwdf/i3_like_scratch_for_any_window_possible/cy6i585]<br />
<br />
For a more sophisticated scratchpad script that supports many terminals out of the box and has flags for doing things like optionally starting a tmuxinator/tmux session, turning any window into a scratchpad on the fly, and automatically resizing a scratchpad to fit the current monitor see {{AUR|tdrop-git}}.<br />
<br />
=== Different monitor configurations for different machines ===<br />
<br />
Since the {{ic|bspwmrc}} is a shell script, it allows you to do things like these:<br />
<br />
#!/bin/bash<br />
<nowiki><br />
if [[ $(hostname) == 'myhost' ]]; then<br />
bspc monitor eDP1 -d I II III IV V VI VII VIII IX X<br />
elif [[ $(hostname) == 'otherhost' ]]; then<br />
bspc monitor VGA-0 -d I II III IV V<br />
bspc monitor VGA-1 -d VI VII VIII IX X<br />
elif [[ $(hostname) == 'yetanotherhost' ]]; then<br />
bspc monitor DVI-I-3 -d VI VII VIII IX X<br />
bspc monitor DVI-I-2 -d I II III IV V<br />
fi<br />
</nowiki><br />
<br />
{{Note|{{Pkg|inetutils}} is required to use the ''hostname'' command.}}<br />
<br />
=== Set up a desktop where all windows are floating ===<br />
<br />
Here is how to set up desktop 3 to have only floating windows. This can be useful for GIMP or other applications with multiple windows.<br />
<br />
Put this script somewhere in your {{ic|$PATH}} and call it from {{ic|.xinitrc}} or similar (with a {{ic|&}} at the end):<br />
<br />
#!/bin/bash<br />
<nowiki><br />
# change the desktop number here<br />
FLOATING_DESKTOP_ID=$(bspc query -D -d '^3')<br />
<br />
bspc subscribe node_add | while read -a msg ; do<br />
desk_id=${msg[2]}<br />
wid=${msg[4]}<br />
[ "$FLOATING_DESKTOP_ID" = "$desk_id" ] && bspc node "$wid" -t floating<br />
done<br />
</nowiki><br />
<br />
([https://github.com/baskerville/bspwm/issues/428#issuecomment-199985423 source])<br />
<br />
=== Keyboard ===<br />
<br />
Bspwm does not handle any keyboard input and instead provides the ''bspc'' program as its interface.<br />
<br />
For keyboard shortcuts, you will have to set up a hotkey daemon like {{Pkg|sxhkd}} ({{AUR|sxhkd-git}} for the development version).<br />
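<br />
As a minimal sketch of the sxhkdrc format (the terminal emulator here is an arbitrary example), a key chord on one line is followed by the command on an indented line:<br />
<br />
{{bc|<nowiki><br />
# open a terminal with super+Return<br />
super + Return<br />
    alacritty<br />
<br />
# close the focused window with super+w<br />
super + w<br />
    bspc node -c<br />
</nowiki>}}<br />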
<br />
== Troubleshooting ==<br />
<br />
=== Blank screen and keybindings don't work ===<br />
<br />
* Make sure {{Pkg|sxhkd}} is installed.<br />
* Make sure you are starting sxhkd (in the background, as it is blocking).<br />
* Make sure {{ic|~/.config/bspwm/bspwmrc}} is executable.<br />
<br />
=== Cursor themes don't apply to the desktop ===<br />
<br />
See [[Cursor themes#Change X shaped default cursor]]<br />
<br />
=== Window box larger than the actual application ===<br />
<br />
This can happen if you are using GTK3 apps, usually for dialog windows. The fix is to create or add the below to a GTK3 theme file ({{ic|~/.config/gtk-3.0/gtk.css}}).<br />
<br />
.window-frame, .window-frame:backdrop {<br />
box-shadow: 0 0 0 black;<br />
border-style: none;<br />
margin: 0;<br />
border-radius: 0;<br />
}<br />
<br />
.titlebar {<br />
border-radius: 0;<br />
}<br />
<br />
(source: [https://bbs.archlinux.org/viewtopic.php?pid=1404973#p1404973 Bspwm forum thread])<br />
<br />
=== Problems with Java applications === <br />
<br />
If you have problems, like Java application Windows not resizing, or menus immediately closing after you click, see [[Java#Gray window, applications not resizing with WM, menus immediately closing]].<br />
<br />
Furthermore, some applications based on Java cannot display any window content at all (e.g. IntelliJ IDEs like PyCharm, CLion, etc.). A solution is to install {{Pkg|wmname}} and add the following line to your {{ic|~/.config/bspwm/bspwmrc}}:<br />
<br />
wmname LG3D<br />
<br />
=== Problems with keybindings using fish ===<br />
<br />
If you use [[fish]], you will find that you are unable to switch desktops. This is because bspc's use of the ^ character is incompatible with fish. You can fix this by explicitly telling sxhkd to use bash to execute commands:<br />
<br />
$ set -U SXHKD_SHELL /usr/bin/bash<br />
<br />
Alternatively, the ^ character may be escaped with a backslash in your sxhkdrc file.<br />
<br />
=== Performance issues using fish ===<br />
<br />
[[sxhkd]] uses the shell set in the SHELL environment variable to execute commands. [[fish]] can have a long initialisation time due to large or improperly configured config files, so all sxhkd commands can take much longer to execute than with other shells. To fix this without changing your default SHELL, you can tell sxhkd explicitly to use bash, or another faster shell (for example, sh), to execute commands:<br />
<br />
$ set -U SXHKD_SHELL sh<br />
<br />
=== Error messages "Could not grab key 43 with modfield 68" on start ===<br />
<br />
Either you are trying to use the same key twice, or you are starting sxhkd twice. Check bspwmrc and {{ic|~/.profile}} or {{ic|~/.bash_profile}} for duplicate commands starting sxhkd.<br />
<br />
=== Firefox context menu automatically selects first option on right click ===<br />
<br />
{{Remove|Should be reported upstream as a software bug}}<br />
<br />
Add the following line to the {{ic|userChrome.css}} file of your Firefox profile:<br />
<br />
{{bc|<nowiki><br />
#contentAreaContextMenu{ margin: 5px 0 0 5px }<br />
</nowiki>}}<br />
<br />
The file should be located in {{ic|~/.mozilla/firefox/''something''.default/chrome/}} (it will need to be created if you don't already have one). Also, in Firefox, you will have to go to the {{ic|about:config}} page and enable the option {{ic|toolkit.legacyUserProfileCustomizations.stylesheets}}; otherwise Firefox will ignore the userChrome.css file.<br />
<br />
== See also ==<br />
<br />
* Mailing List: bspwm ''at'' librelist.com.<br />
* {{ic|#bspwm}} - IRC channel at irc.freenode.net<br />
* https://bbs.archlinux.org/viewtopic.php?id=149444 - Arch BBS thread<br />
* https://github.com/baskerville/bspwm - GitHub project<br />
* https://github.com/windelicato/dotfiles/wiki/bspwm-for-dummies - earsplit's "bspwm for dummies"</div>Af