Talk:LXD

Removal of "LXD Networking"

The "LXD init" command takes care of network setup as well, so the section is not necessary anymore and it was also outdated.

We could write a new section on networking, but it would require covering a lot of material, because LXD has added more network device types (e.g. macvlan, ipvlan).

I would rely on upstream for documentation; you can find an overview here: https://linuxcontainers.org/lxd/advanced-guide/#networks

Sadly there is no "real" official guide yet, but some blogs and the LXD forum provide more detailed setup instructions.

For those who want to add a new section, some notes:

  1. LXD can create new bridges on its own, with the command lxc network create (see the sketch after this list).
  2. You can apply different network devices by editing the container config or specific profiles.
  3. Firewalls can sometimes be a problem; Docker in particular is known to set up rules that interfere with LXD's networking.
  4. The different network types have different pros and cons, so an overview might describe them (for example, macvlan devices allow using an IP address from the router and bypass the host's firewall, but by default deny network access between host and container, which needs a workaround).
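To make points 1 and 2 concrete, a rough sketch of the commands involved (lxdbr0, mycontainer and eno1 are placeholder names, not recommendations):

# create a managed bridge and attach a container to it
$ lxc network create lxdbr0
$ lxc network attach lxdbr0 mycontainer eth0
# or add a macvlan device to a profile instead
$ lxc profile device add default eth0 nic nictype=macvlan parent=eno1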

G3ro (talk) 19:52, 14 October 2020 (UTC)

Regarding "lxd-agent inside a virtual machine"

AFAIK most images now ship the lxd-agent by default, so this would only be necessary for images without it. But I don't know whether that applies to the Arch images as well.

G3ro (talk) 21:45, 12 October 2020 (UTC)

I frankly haven't had time to check. If you can test them and see how it works, it might be a good idea. Otherwise we can move it to some "Tips and Tricks" section? Foxboron (talk) 21:54, 12 October 2020 (UTC)
I just found a post in the official forum again ("Running virtual machines with LXD 4.0"); it mentions out-of-the-box lxd-agent support in the "cloud" variants of the images for the following distros:
* Arch Linux
* CentOS (7 and up)
* Debian (8 and up)
* Fedora
* Gentoo
* OpenSUSE
* Ubuntu
Update: It seems the situation is a bit more complicated, because not all distros have a "cloud" variant and not all distro versions support virtual machines. Well, it can't be helped; the situation can change dynamically, so I will simply instruct readers to look out for "cloud" variants and alternatively point them to the instructions (see the example below).
I put the instructions into the troubleshooting section yesterday.
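For example, checking the image server for Arch cloud variants and launching one as a VM might look like this (a sketch; the filter arguments and image availability are assumptions, and note the image server access changes discussed further down this page):

$ lxc image list images: archlinux cloud
$ lxc launch images:archlinux/cloud arch-vm --vm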
G3ro (talk) 13:34, 13 October 2020 (UTC)

Why not suggest enabling the socket, not the service?

From what I can see, socket activation has been available since day one (3.20). Yet the wiki page suggests enabling the service. Am I missing something? x-yuri (talk) 07:41, 18 October 2022 (UTC)

Well, I have to admit that I don't know much about the functionality of systemd sockets, so beware of me saying silly stuff ;D.
First of all, IIRC enabling the service is also what the upstream docs describe.
Do I understand correctly that the socket will only start the service if someone tries to "call" LXD's unix socket?
If that's the case, it might not be what every user wants, because for example containers etc. can also autostart.
We might then add it as an alternative, or under "Tips & Tricks", instead.
G3ro (talk) 20:47, 18 October 2022 (UTC)
I think you're basically right. The idea of socket activation goes back a while; the first implementation I know of is inetd. When systemd sees (during boot) that it needs to activate a socket, it creates the socket and starts listening on it. When something connects to it, systemd starts the matching service unit and hands the socket over to the started service.
And yes, that probably means that LXD containers/VMs won't autostart. (I'll probably try to confirm it.) But AFAICT the wiki mainly concerns itself with desktop usage. And on a desktop... okay, on my machine I need LXD to launch a container/VM, check something, then destroy it. I don't need containers/VMs to autostart, and I'm not sure why one would want them to, but I don't rule out that there are other usage patterns. In other words... Tips & Tricks?.. I think this possibility deserves a mention at the beginning.
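To illustrate the pairing (hypothetical demo units, not LXD's actual ones; systemd matches demo.socket to demo.service by name):

# demo.socket: systemd creates /run/demo.sock at boot and listens on it
[Socket]
ListenStream=/run/demo.sock

[Install]
WantedBy=sockets.target

# demo.service: started on the first connection; the listening socket is
# passed to the daemon via the sd_listen_fds() protocol
[Service]
ExecStart=/usr/bin/demo-daemon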
Also, I'm not sure which upstream docs you're referring to, but what I see here:

By default, LXD is socket activated and configured to listen only on a local UNIX socket. While LXD may not be running when you first look at the process listing, any LXC command will start it up.

Considering this, maybe that's the default even on an Ubuntu server. Got to confirm this.
UPD I've just tried it on a DigitalOcean droplet and an AWS EC2 instance (Ubuntu 22.04). lxd is preinstalled for some reason. Anyway, technically it's socket-activated. But snap creates and enables a service (snap.lxd.activate.service) that activates lxd if "LXD has any auto-started instances, instances which were running prior to LXD's last shutdown or if it's configured to listen on the network address."
And yes, if you enable the socket, not the service, containers/vms won't autostart on Arch Linux until you socket-activate the service.
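In other words, the two options on Arch would be (assuming the unit names shipped by the lxd package):

# start LXD (and its instances) only on first access to the socket
$ systemctl enable --now lxd.socket
# start LXD, and any autostarting instances, at every boot
$ systemctl enable --now lxd.service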
x-yuri (talk) 23:42, 18 October 2022 (UTC)
Very interesting. Thx for the information.
I agree that we might edit the sentence and even make socket activation the default.
But I would like to see a second sentence mentioning (direct) service activation, in case someone wants LXD to autostart without interaction.
snap.lxd.activate.service is especially interesting, but I assume this functionality is only available with snap? Otherwise an implementation in the Arch package would be worth considering.
G3ro (talk) 21:37, 19 October 2022 (UTC)
I changed the text to recommend enabling lxd.socket by default.
See https://wiki.archlinux.org/index.php?title=LXD&oldid=753919
In case you disagree with the wording, feel free to change it.
G3ro (talk) 18:55, 20 October 2022 (UTC)
I'd say "in case you want lxd (with instances, if any) to always start on boot."
Anyway, I've spent some more time on it, and... it's not hard to make it work the way it works on Ubuntu. You can find the description in a feature request I've created. One option would be to wait for it to be resolved; another, to add it to Tips & Tricks for the time being. That is, if you like the idea (lxd-activateifneeded.service). x-yuri (talk) 10:04, 26 October 2022 (UTC)
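For reference, a sketch of what such a unit could look like (based on the snap behavior quoted above; lxd activateifneeded is the upstream subcommand that snap.lxd.activate.service runs, but the actual implementation in the feature request may differ):

# /etc/systemd/system/lxd-activateifneeded.service (sketch)
[Unit]
Description=Start LXD if it has autostarted or previously running instances
Requires=lxd.socket
After=lxd.socket

[Service]
Type=oneshot
# asks LXD to start itself only if there is something to do
ExecStart=/usr/bin/lxd activateifneeded

[Install]
WantedBy=multi-user.target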
So it's working even without snap; that's good news.
I would wait until the maintainers decide about the implementation. If they decline, we can add it to Tips and tricks; if they agree, we can modify the general text.
G3ro (talk) 20:05, 26 October 2022 (UTC)

Configuring OVMF with SecureBoot for Windows guest on LXD

While troubleshooting, I stumbled across the "Starting_a_virtual_machine_fails" section. TL;DR: OVMF doesn't ship with SecureBoot support on Arch, so disable it in LXD. Digging further, I found a superuser post [2] that explains how to set up OVMF with SecureBoot for Windows using a self-signed PK. You also need to add additional raw.qemu parameters to force LXD to use it, but in my case disabling SecureBoot wasn't the right (nor the only) solution.
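(For reference, the disabling mentioned in that section is a one-liner, since security.secureboot is a regular LXD instance option; w11sb is my VM's name:)

$ lxc config set w11sb security.secureboot=false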

The proper solution was to:

  • Follow the procedure in [2] to generate OVMF_VARS.ms.fd.
  • Place a copy of `OVMF_VARS.ms.fd` in the LXD VM folder (or anywhere LXD can access). Here my VM is called `w11sb`.
  • Point to it with `raw.qemu` args as follows:
raw.qemu: -drive if=pflash,format=raw,readonly=on,file="/usr/share/OVMF/x64/OVMF_CODE.secboot.fd"
    -drive if=pflash,format=raw,file="/var/lib/lxd/virtual-machines/w11sb/OVMF_VARS.ms.fd"
    -bios /usr/share/OVMF/x64/OVMF_CODE.secboot.fd
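Since the value spans several lines, one way to apply it is through the YAML editor (a sketch; raw.qemu is a regular instance option, placed under the config: section):

$ lxc config edit w11sb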

Now, I'm writing this talk section because of the following:

  1. Generating the `OVMF_VARS.ms.fd` configuration using procedure [2] doesn't feel right in the context of the LXD page. Yet I don't think there's an OVMF-specific page; the closest thing would be Testing_UEFI_in_systems_without_native_support [3].
  2. Having to manually manage `OVMF_VARS.ms.fd` in the LXD internals when you spawn a VM feels quite horrible.
  3. The raw.qemu workaround includes an odd trick that I figured out myself through trial and error. I'm not sure whether that should be documented or reported. Here are the details for reference:
It is my understanding that you should simply have to point "-bios" to "OVMF_CODE.secboot.fd" to get it working. Yet if I do just that, the VM doesn't load at all. You can also use "-drive" to specify the two pflash drives manually, but if I do that, I get an error saying the firmware files are already taken. So I noticed that when you add the "-bios" field, LXD detects it and omits generating the pflash drives in qemu.conf.

References:

Unitiser (talk) 17:37, 10 September 2023 (UTC)

Image server not working anymore

The Linux Containers project has made the decision to phase out image server access for _ALL_ LXD users, so commands like:

$ lxc image list images:
$ lxc launch images:archlinux/current/amd64 arch

have not worked since 2024-01-15, when LXD 5.20+ users running on Arch, Debian, Fedora, Gentoo, NixOS or Ubuntu completely lost access.

More info: https://discuss.linuxcontainers.org/t/important-notice-for-lxd-users-image-server/18479

AlonsoLP (talk) 13:22, 17 February 2024 (UTC)

LXD unpleasant changes

After reading several articles by former maintainers of LXD, such as https://discuss.linuxcontainers.org/t/lxd-has-been-re-licensed-and-is-now-under-a-cla/18454 and the link mentioned above (https://discuss.linuxcontainers.org/t/important-notice-for-lxd-users-image-server/18479), I think it would be reasonable to inform users about the situation and about Incus, the free fork by the former maintainers.

I added a link in the related articles section already, but I would like to add some info to the description as well.

Any opinions or objections? G3ro (talk) 18:20, 29 February 2024 (UTC)