[[Category:Security]]
[[Category:Virtualization]]
{{Stub|Some parts of this are dated; ideally this page would be a summary of container tools and discuss LXC, chroot, systemd-nspawn, and docker, plus the basics required to get each going, with a more detailed subpage on each.}}

==Introduction==

===Synopsis===

Linux Containers (LXC) are an operating system-level virtualization method for running multiple isolated server installs (containers) on a single control host. LXC does not provide a virtual machine, but rather provides a virtual environment that has its own process and network space. It is similar to a chroot, but offers much more isolation.

===About this HowTo===

This document is intended as an overview of setting up and deploying containers, not as an in-depth, step-by-step guide. A certain amount of prerequisite knowledge and skills are assumed (running commands as root, kernel configuration, mounting filesystems, shell scripting, chroot-type environments, networking setup, etc.).

Much of this was taken verbatim from [http://lxc.teegra.net/ Dwight Schauer], [http://tuxce.selfip.org/informatique/conteneurs-linux-lxc Tuxce] and [http://artisan.karma-lab.net/node/1749 Ulhume]. It has been copied here both to enable the community to share their collective wisdom and to expand on a few points.

===Less verbose tutorial===

[[User:Delerious010|Delerious010]] 21:43, 1 December 2009 (EST) I have come to realize I have added a lot of text to this HowTo. If you would like something more streamlined, please head on over to [http://lxc.teegra.net/ http://lxc.teegra.net/] for Dwight's excellent guide.

===Testing capabilities===

Once the lxc package is installed, running {{ic|lxc-checkconfig}} will print out a list of your system's capabilities.

==Host configuration==

===Control group filesystem===

LXC depends on the control group filesystem being mounted. The standard location for it is {{ic|/sys/fs/cgroup}}. If you use systemd, the cgroup filesystem will be mounted automatically, including the default controllers, but with other init systems you might have to do it yourself:

 mount -t tmpfs none /sys/fs/cgroup
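
If you are not using systemd and want this mount to be set up at boot, an [[fstab]] entry along the following lines (simply mirroring the command above) should work:

 none /sys/fs/cgroup tmpfs defaults 0 0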

===Userspace tools===

Install {{Pkg|lxc}} from [community]. For networking, you will probably need {{Pkg|bridge-utils}} and {{Pkg|netctl}} or {{Pkg|openvpn}}.

===Bridge device setup===

The preferred way to set up a bridge in Arch is with [[netctl]], as explained in detail in [[Bridge_with_netctl]]. In the config for your container, just specify the host interface as whatever you name your bridge (usually br0). You can find a skeleton implementation in {{ic|/etc/netctl/examples/bridge}}.

Alternatively, you can use an [[OpenVPN Bridge]], which is useful if you are already familiar with OpenVPN or already running it.

===NAT device setup===

If you don't have a device you can easily bridge (such as a wireless interface), you can instead set up NAT with [[netctl]], using the same {{ic|/etc/netctl/examples/bridge}} example with the following changes:

 BindsToInterfaces=()
 IP=static
 Address=192.168.100.1/24
 FwdDelay=0
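
For reference, a complete profile might look like the following (a sketch only; the profile name ''lxcbridge'', the bridge name ''br0'' and the address range are just examples):

 # /etc/netctl/lxcbridge
 Description="NAT bridge for LXC containers"
 Interface=br0
 Connection=bridge
 BindsToInterfaces=()
 IP=static
 Address=192.168.100.1/24
 FwdDelay=0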

Remember to copy the example to {{ic|/etc/netctl}} and name it whatever you want. You can use any address range and subnet mask you want for the interface (just make sure it is one you are not already using). Once this interface is up with {{ic|netctl start <profile>}}, you need to have [[iptables]] masquerade traffic leaving your external interface, and you need to enable IP forwarding with [[sysctl]]:

 iptables -t nat -A POSTROUTING -o <external interface such as eth0 or wlan0> -j MASQUERADE
 sysctl net.ipv4.ip_forward=1

To have the NAT prepared at boot, and to save the iptables and sysctl states:

 netctl enable <profile>
 iptables-save > /etc/iptables/iptables.rules
 echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.d/40-ip-forward.conf
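
On Arch the saved rules in {{ic|/etc/iptables/iptables.rules}} are loaded at boot by the ''iptables'' service, so you will probably also want to enable it:

 systemctl enable iptables.service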

In your container config file, you will need to assign an IP address:

 lxc.network.ipv4 = 192.168.100.2/24

When you enter your container, you must set the default gateway to the netctl interface's address, which in this example is 192.168.100.1. In any container that includes the {{ic|ip}} command, the following will work:

 ip route add default via 192.168.100.1

Or, on distros such as Ubuntu that use /etc/network:

{{hc|/etc/network/if-up.d/routes|
#! /bin/sh
route add default gw 192.168.100.1
exit 0}}

===Starting a container on boot with [[Systemd]]===

Once you have a working container, you can start it when the host boots using the following systemd service template:

{{bc|1=
[Unit]
Description=Linux Container %i
After=network.target

[Service]
Type=forking
ExecStartPre=/bin/mount --make-rprivate /
ExecStart=/usr/bin/lxc-start -dn %i
ExecStop=/usr/bin/lxc-stop -n %i

[Install]
WantedBy=multi-user.target
}}
Save this file as {{ic|/etc/systemd/system/lxc@.service}}. Then you can register it with this command:

 systemctl enable lxc@CONTAINER_NAME.service
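
To start the container immediately and check that it came up, something like the following should work (replace CONTAINER_NAME as above):

 systemctl start lxc@CONTAINER_NAME.service
 lxc-info -n CONTAINER_NAME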

==Container setup==

'''Note''': Configuring a container that runs systemd requires specific configuration that is discussed [[lxc-systemd|here]].

There are several different ways to do this.

===Creating the filesystem===

====Bootstrap====
Bootstrap an install ( [http://blog.mudy.info/tag/mkarchroot/ mkarchroot], [http://wiki.debian.org/Debootstrap debootstrap], [http://www.xen-tools.org/software/rinse/faq.html rinse], [[Install From Existing Linux]] ). You can also just copy/use an existing installation's complete root filesystem.

For example, to install a small Debian to /home/lxc/debianfs:

 yaourt -S debootstrap   # install debootstrap from the AUR

 # method 1: install wheezy from a US mirror
 sudo debootstrap wheezy /home/lxc/debianfs http://ftp.us.debian.org/debian
 # or, method 2: use the faster tarball method
 sudo debootstrap --make-tarball wheezy.packages.tgz wheezy http://debian.osuosl.org/debian/
 sudo debootstrap --unpack-tarball wheezy.packages.tgz wheezy /home/lxc/debianfs
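
For an Arch Linux guest, a root filesystem can be bootstrapped in much the same way; for example (a sketch assuming {{Pkg|devtools}} is installed, and an example path):

 sudo mkarchroot /home/lxc/archfs base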

====Download existing====
You can download a base install tarball. OpenVZ templates work just fine.

====Using the lxc tools====
 /usr/bin/lxc-debian {create|destroy|purge|help}
 /usr/bin/lxc-fedora {create|destroy|purge|help}

Nowadays you can create a small and simple Arch Linux container with:
 # lxc-create -n containername -t archlinux -- -P vim,dhclient

With the template-specific option ''-P'' you can add a comma-separated list of packages to the installation.
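
The template can also be combined with an existing configuration file (for example one written as described in [[#Configuration file]] below); a hypothetical invocation:

 # lxc-create -n containername -t archlinux -f /etc/lxc/containername.conf -- -P vim,dhclient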

===Creating the device nodes===
Since [[udev]] does not work within the container, you will want to make sure that a certain minimum set of device nodes is created for it. This may be done with the following script:
 #!/bin/bash
 ROOT=$(pwd)
 DEV=${ROOT}/dev
 mv ${DEV} ${DEV}.old
 mkdir -p ${DEV}
 mknod -m 666 ${DEV}/null c 1 3
 mknod -m 666 ${DEV}/zero c 1 5
 mknod -m 666 ${DEV}/random c 1 8
 mknod -m 666 ${DEV}/urandom c 1 9
 mkdir -m 755 ${DEV}/pts
 mkdir -m 1777 ${DEV}/shm
 mknod -m 666 ${DEV}/tty c 5 0
 mknod -m 600 ${DEV}/console c 5 1
 mknod -m 666 ${DEV}/tty0 c 4 0
 mknod -m 666 ${DEV}/full c 1 7
 mknod -m 600 ${DEV}/initctl p
 mknod -m 666 ${DEV}/ptmx c 5 2
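
Note that the script operates on {{ic|./dev}} relative to the current directory, so run it as root from the container's root filesystem; for example (both paths are only examples):

 cd /var/lib/lxc/containername/rootfs
 bash /path/to/make-devices.sh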

==Container configuration==

===Configuration file===

The main configuration files are used to describe how to originally create a container. Though these files may be located anywhere, /etc/lxc is probably a good place.

'''23/Aug/2010: Be aware that the kernel may not handle additional whitespace in the configuration file. This has been experienced on "lxc.cgroup.devices.allow" settings but may also be true of other settings. If in doubt, use only one space wherever whitespace is required.'''

====Basic settings====

 lxc.utsname = $CONTAINER_NAME
 lxc.mount = $CONTAINER_FSTAB
 lxc.rootfs = $CONTAINER_ROOTFS

 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br0
 lxc.network.hwaddr = $CONTAINER_MACADDR
 lxc.network.ipv4 = $CONTAINER_IPADDR
 lxc.network.name = $CONTAINER_DEVICENAME

=====Basic settings explained=====

'''lxc.utsname''': This will be the name of the cgroup for the container. Once the container is started, you should be able to see a new folder named ''/cgroup/$CONTAINER_NAME''.

Furthermore, this will also be the value returned by ''hostname'' from within the container. Assuming you have not removed access, the container may overwrite this with its init script.

'''lxc.mount''': This points to an fstab-formatted file that lists the mount points used when ''lxc-start'' is called. This file is explained further in [[#Configuring fstab]].
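
As a purely illustrative example, a filled-in configuration for a container named ''mycontainer'' (using the NAT addressing from earlier and an example MAC address) might look like:

 lxc.utsname = mycontainer
 lxc.mount = /etc/lxc/mycontainer.fstab
 lxc.rootfs = /var/lib/lxc/mycontainer/rootfs
 lxc.network.type = veth
 lxc.network.flags = up
 lxc.network.link = br0
 lxc.network.hwaddr = 4e:a1:b2:c3:d4:e5
 lxc.network.ipv4 = 192.168.100.2/24
 lxc.network.name = eth0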

====Terminal settings====

The following configuration is optional. You may add it to your main configuration file if you wish to log in via lxc-console, or through a terminal ( e.g.: {{ic|Ctrl+Alt+F1}} ).

The container can be configured with virtual consoles (tty devices). These may be devices from the host that the container is given permission to use (by its configuration file) or they may be devices created locally within the container.

The host's virtual consoles are accessed using the key sequence {{ic|Alt+Fn}} (or {{ic|Ctrl+Alt+Fn}} from within an X11 session). The left {{ic|Alt}} key reaches consoles 1 through 12 and the right {{ic|Alt}} key reaches consoles 13 through 24. Further virtual consoles may be reached with the {{ic|Alt+→}} key sequence, which steps to the next virtual console.

The container's local virtual consoles may be accessed using the "lxc-console" command.

===== Host Virtual Consoles =====

The container may access the host's virtual consoles if the host is not using them and the container's configuration allows it. Typical container configuration would deny access to all devices and then allow access to specific devices like this:

 lxc.cgroup.devices.deny = a          # Deny all access to devices
 lxc.cgroup.devices.allow = c 4:0 rwm # /dev/tty0
 lxc.cgroup.devices.allow = c 4:1 rwm # /dev/tty1
 lxc.cgroup.devices.allow = c 4:2 rwm # /dev/tty2

For a container to be able to use one of the host's virtual consoles, that console must not be in use by the host. This will most likely require the host's {{ic|/etc/inittab}} to be modified to ensure no getty or other process runs on any virtual console that is to be used by the container.

After editing the host's {{ic|/etc/inittab}} file, issuing a {{ic|killall -HUP init}} will terminate any getty processes that are no longer configured, and this will free up the virtual console for use by the container.

Note that local virtual consoles take precedence over host virtual consoles. This is described in the next section.

===== Local Virtual Consoles =====

The number of local virtual consoles that the container has is defined in the container's configuration file (normally on the host in {{ic|/etc/lxc}}). It is defined thus:

 lxc.tty = n

where {{ic|n}} is the number of local virtual consoles required.

The local virtual consoles are numbered starting at tty1 and take precedence over any of the host's virtual consoles that the container might be entitled to use. This means that, for example, if n = 2 then the container will not be able to use the host's tty1 and tty2 devices even if entitled to do so by its configuration file. Setting n to 0 will prevent local virtual consoles from being created, thus allowing full access to any of the host's virtual consoles that the container might be entitled to use.

===== /dev/tty Device Files =====
The container must have a tty device file (e.g. {{ic|/dev/tty1}}) for each virtual console (host or local). These can be created thus:
 # mknod -m 666 /dev/tty1 c 4 1
 # mknod -m 666 /dev/tty2 c 4 2

and so on...

In the above, {{ic|c}} means character device, {{ic|4}} is the major device number (tty devices) and {{ic|1}}, {{ic|2}}, {{ic|3}}, etc., is the minor device number (specific tty device). Note that {{ic|/dev/tty0}} is special and always refers to the current virtual console.

For further info on tty devices, read this: http://www.kernel.org/pub/linux/docs/device-list/devices.txt

'''If a virtual console's device file does not exist in the container, then the container cannot use the virtual console.'''

===== Configuring Log-In Ability =====

The container's virtual consoles may be used for login sessions if the container runs "getty" services on the corresponding tty devices. This is normally done by the container's "init" process and is configured in the container's {{ic|/etc/inittab}} file using lines like this:

 c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux

There is one line per device. The first part {{ic|c1}} is just a unique label, the second part defines the applicable run levels, the third part tells init to start a new getty when the current one terminates, and the last part gives the command line for the getty. For further information refer to {{ic|man init}}.

If there is no getty process on a virtual console, it will not be possible to log in via that virtual console. A getty is not required on a virtual console unless it is to be used to log in.

If a virtual console is to allow root logins, it also needs to be listed in the container's {{ic|/etc/securetty}} file.
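
For example, from within the container (assuming root login should be possible on tty1 and tty2):

 echo tty1 >> /etc/securetty
 echo tty2 >> /etc/securetty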

===== Troubleshooting virtual consoles =====

If lxc.tty is set to a number, n, then no host devices numbered n or below will be accessible even if the above configuration is present, because they will be replaced with local virtual consoles instead.

A tty device file's major number will change from 4 to 136 if it is a local virtual console. This change is visible within the container but not when viewing the container's devices from the host's filesystem. This information is useful when troubleshooting.

This can be checked from within a container thus:

 # ls -Al /dev/tty*
 crw------- 1 root root 136, 10 Aug 21 21:28 /dev/tty1
 crw------- 1 root root 4, 2 Aug 21 21:28 /dev/tty2

===== Pseudo Terminals =====

 lxc.pseudo = 1024

Maximum number of pseudo terminals that may be created in {{ic|/dev/pts}}. Currently, assuming the kernel was compiled with {{ic|CONFIG_DEVPTS_MULTIPLE_INSTANCES}}, this tells lxc-start to mount the devpts filesystem with the newinstance flag.

====Host device access settings====

 lxc.cgroup.devices.deny = a            # Deny all access to devices

 lxc.cgroup.devices.allow = c 1:3 rwm   # dev/null
 lxc.cgroup.devices.allow = c 1:5 rwm   # dev/zero

 lxc.cgroup.devices.allow = c 5:1 rwm   # dev/console
 lxc.cgroup.devices.allow = c 5:0 rwm   # dev/tty
 lxc.cgroup.devices.allow = c 4:0 rwm   # dev/tty0

 lxc.cgroup.devices.allow = c 1:9 rwm   # dev/urandom
 lxc.cgroup.devices.allow = c 1:8 rwm   # dev/random
 lxc.cgroup.devices.allow = c 136:* rwm # dev/pts/*
 lxc.cgroup.devices.allow = c 5:2 rwm   # dev/pts/ptmx

 # No idea what this is .. dev/bsg/0:0:0:0 ???
 lxc.cgroup.devices.allow = c 254:0 rwm

=====Host device access settings explained=====

'''lxc.cgroup.devices.deny''': By setting this to ''a'', we are stating that the container has access to no devices unless explicitly defined within the configuration file.

===Configuration file notes===
====At runtime /dev/ttyX devices are recreated====
If you have enabled multiple DevPTS instances in your kernel, lxc-start will recreate ''lxc.tty'' number of {{ic|/dev/ttyX}} devices when it is executed.

This means that you will have ''lxc.tty'' number of pseudo ttys. If you are planning on accessing the container via a "real" terminal ({{ic|Ctrl+Alt+FX}}), make sure that it is a number that is inferior to ''lxc.tty''.

To tell whether a device has been re-created, just log in to the container via either lxc-console or SSH and perform a {{ic|ls -Al}} command on the tty. Devices with a major number of 4 are "real" tty devices, whereas a major number of 136 indicates a pts.

Be aware that this is only visible from within the container itself and not from the host.

====Containers have access to host's TTY nodes====

If you do not properly restrict the container's access to the /dev/tty nodes, the container may have access to the host's.

Taking into consideration that, as previously mentioned, lxc-start recreates ''lxc.tty'' number of /dev/tty devices, any tty nodes present in the container that have a greater minor number than ''lxc.tty'' will be linked to the host's.

=====To access the container from a host TTY=====

# On the host, verify no getty is started for that tty by checking ''/etc/inittab''.
# In the container, start a getty for that tty.

=====To prevent access to the host TTY=====

Please have a look at the configuration statements found in [[#Host device access settings|host device access settings]].

Via ''lxc.cgroup.devices.deny = a'' we are preventing access to all host-level devices. Then, through ''lxc.cgroup.devices.allow = c 4:'''1''' rwm'' we are allowing access to the host's /dev/tty'''1'''. In the above example, simply removing all allow statements for major number 4 and minor > 1 should be sufficient.

=====To test this access=====

I may be off here, but looking at the output of the ''ls'' command below should show you both the ''major'' and ''minor'' device numbers. These are located after the user and group, and represented as: 4, 2

# Set lxc.tty to 1
# Make sure that the container has /dev/tty1 and /dev/tty2
# ''lxc-start'' the container
# ''lxc-console'' into the container
# ''ls -Al /dev/tty''<br>crw------- 1 root root 4, 2 Dec 2 00:20 /dev/tty2
# ''echo "test output" > /dev/tty2''
# ''Ctrl+Alt+F2'' to view the host's second terminal
# You should see "test output" printed on the screen

====Configuration troubleshooting====

=====console access denied: Permission denied=====

If, when executing lxc-console, you receive the error ''lxc-console: console access denied: Permission denied'', you have most likely either omitted lxc.tty or set it to 0.

=====lxc-console does not provide a login prompt=====

Though you are reaching a tty on the container, it most likely is not running a getty. You will want to double-check that you have a getty defined in the container's ''/etc/inittab'' for the specific tty.

If using '''systemd''', chances are that a problem with the ''getty@.service'' unit will bite you. The unit only starts a getty if ''/dev/tty0'' exists, and since this condition is not met in the container, you get no getty. Use this patch to make ''lxc-console'' work:

<pre>
--- /usr/lib/systemd/system/getty@.service.orig 2013-05-30 12:55:28.000000000 +0000
+++ /usr/lib/systemd/system/getty@.service 2013-06-16 23:05:49.827146901 +0000
@@ -20,7 +20,8 @@
 # On systems without virtual consoles, don't start any getty. (Note
 # that serial gettys are covered by serial-getty@.service, not this
 # unit
-ConditionPathExists=/dev/tty0
+ConditionVirtualization=|lxc
+ConditionPathExists=|/dev/tty0
 
 [Service]
 # the VT is cleared by TTYVTDisallocate
</pre>

For more than one getty you have to explicitly enable the needed service (and decrease ''lxc.tty'' in the container configuration). In the ''real'' system, a configurable number of getty services is automatically created by ''systemd-logind.service''.
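
For example, to run gettys on the container's tty1 and tty2, you could enable the corresponding instances from within the container (a sketch; adjust the tty names to your setup):

 systemctl enable getty@tty1.service getty@tty2.service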

===Configuring fstab===
 none $CONTAINER_ROOTFS/dev/pts devpts defaults 0 0
 none $CONTAINER_ROOTFS/proc proc defaults 0 0
 none $CONTAINER_ROOTFS/sys sysfs defaults 0 0
 none $CONTAINER_ROOTFS/dev/shm tmpfs defaults 0 0

This fstab is used by lxc-start when mounting the container. As such, you can define any mount that would be possible on the host, such as bind mounting part of the host's own filesystem. However, please be aware of any and all security implications that this may have.
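
For instance, to bind mount a directory from the host into the container, a line like the following could be added (both paths are examples; the target directory must already exist in the container):

 /srv/data $CONTAINER_ROOTFS/srv/data none bind 0 0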

'''Warning''': You certainly do not want to bind mount the host's /dev to the container, as this would allow it to, amongst other things, reboot the host.

==Container Creation and Destruction==

===Creation===
 lxc-create -f $CONTAINER_CONFIGPATH -n $CONTAINER_NAME

''lxc-create'' will create /var/lib/lxc/$CONTAINER_NAME with a new copy of the container configuration file found in $CONTAINER_CONFIGPATH.

As such, if you need to make modifications to the container's configuration file, it is advisable to modify only the original file and then perform ''lxc-destroy'' and ''lxc-create'' operations afterwards. No data will be lost by doing this.

'''Note''': When copying the file over, lxc-create will strip all comments from the file.

'''Note''': As of lxc-git from at least ''2009-12-01'', performing lxc-create no longer splits the config file into multiple files and folders. Therefore, we only have the configuration file to worry about.

===Destruction===
 lxc-destroy -n $CONTAINER_NAME

This will delete /var/lib/lxc/$CONTAINER_NAME, which only contains configuration files. No data will be lost.

==Readying the host for virtualization==
===/etc/inittab===
# Comment out any gettys that are not required

===/etc/rc.sysinit replacement===
Since we are running in a virtual environment, a number of steps undertaken by rc.sysinit are superfluous and may even flat out fail or stall. As such, until the initscripts are made virtualization-aware, this will take some hack and slash.

For now, simply replace the file:
 #!/bin/bash
 # Whatever is needed to clean out old daemon/service pids from your container
 rm -f $(find /var/run -name '*pid')
 rm -f /var/lock/subsys/*

 # Configure network settings
 ## You can either use dhcp here, manually configure your
 ## interfaces or try to get the rc.d/network script working.
 ## There have been reports that network failed in this
 ## environment.
 ip route add default via 192.168.10.1
 echo 'search your-domain' > /etc/resolv.conf
 echo 'nameserver 192.168.10.1' >> /etc/resolv.conf

 # Initially we do not have any container-originated mounts
 rm -f /etc/mtab
 touch /etc/mtab

===/etc/rc.conf cleanup===
You may want to remove any and all hardware-related daemons from the DAEMONS line. Furthermore, depending on your situation, you may also want to remove the ''network'' daemon.

===TBC===

==Known Problems==

===Using systemd inside a docker container results in a segfault===

See the [https://github.com/dotcloud/docker/issues/3629 docker github issue]: launching /usr/lib/systemd/systemd --system results in a segfault, last tested with systemd 208-10.

===Container cannot be shut down if using systemd===
''lxc-shutdown'' should be used for clean shutdown or reboot of the container, but only the ''reboot'' works out of the box when using systemd.

Shutdown will be signalled to the container with ''SIGPWR'', but current systemd does not ship any unit to handle ''sigpwr.target''. For the container we can simply reuse ''poweroff.target'' and get exactly what we want:
 # ln -s /usr/lib/systemd/system/poweroff.target ${CONTAINER_RFS}/etc/systemd/system/sigpwr.target
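
With that symlink in place, both operations should then work from the host (a usage sketch; replace the container name):

 lxc-shutdown -n containername      # clean shutdown via SIGPWR
 lxc-shutdown -n containername -r   # reboot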

==See Also==
*[[Arch systemd container]]
*[http://www.ibm.com/developerworks/linux/library/l-lxc-containers/ LXC@developerWorks]
*[http://docs.docker.io/en/latest/installation/archlinux/ Docker Installation on ArchLinux]
<hr />
<div>[[Category:Security]]<br />
[[Category:Virtualization]]<br />
{{Stub|Currently just a rough draft... I think I will need to restructure this a bit and I have also noticed I have become a bit too verbose -_-;; I will be along shortly to complete this as well as clean it up.}}<br />
<br />
==Introduction==<br />
''(Ideally this page would be a summary of container tools and discuss LXC, chroot, systemd-nspawn, and docker + the basics required to get each going, with a more detailed subpage on each)''<br />
<br />
===Synopsis===<br />
<br />
Linux Containers (LXC) are an operating system-level virtualization method for running multiple isolated server installs (containers) on a single control host. LXC does not provide a virtual machine, but rather provides a virtual environment that has its own process and network space. It is similar to a chroot, but offers much more isolation.<br />
<br />
===About this HowTo===<br />
<br />
This document is intended as an overview on setting up and deploying containers, and is not an in depth detailed instruction by instruction guide. A certain amount of prerequisite knowledge and skills are assumed (running commands as root, kernel configuration, mounting filesystems, shell scripting, chroot type environments, networking setup, etc).<br />
<br />
Much of this was taken verbatim from [http://lxc.teegra.net/ Dwight Schauer], [http://tuxce.selfip.org/informatique/conteneurs-linux-lxc Tuxce] and [http://artisan.karma-lab.net/node/1749 Ulhume]. It has been copied here both to enable to community to share their collective wisdom and to expand on a few points.<br />
<br />
===Less verbose tutorial===<br />
<br />
[[User:Delerious010|Delerious010]] 21:43, 1 December 2009 (EST) I have come to realize I have added a lot of text to this HowTo. If you would like something more streamlined, please head on over to [http://lxc.teegra.net/ http://lxc.teegra.net/] for Dwight's excellent guide.<br />
<br />
===Testing capabilities===<br />
<br />
Once the lxc package is installed, running lxc-checkconfig will print out a list of your system's capabilities<br />
<br />
==Host configuration==<br />
<br />
===Control group filesystem===<br />
<br />
LXC depends on the control group filesystem being mounted. The standard location for it is {{ic|/sys/fs/cgroup}}. If you use systemd, the cgroup filesystem will be mounted automatically, including the default controllers, but with other initsystems you might have to do it yourself:<br />
<br />
mount -t tmpfs none /sys/fs/cgroup<br />
<br />
===Userspace tools===<br />
<br />
Install {{Pkg|lxc}} from [community]. For networking, you will probably need {{Pkg|bridge-utils}} and {{Pkg|netctl}} or {{Pkg|openvpn}}.<br />
<br />
===Bridge device setup===<br />
<br />
The preferred way to setup a Bridge in Arch is with [[netctl]], and is explained in detail in the article: [[Bridge_with_netctl]]. In the config for your container, just specify the host interface as whatever you name your bridge (usually br0). You can find a skeleton implementation in {{ic|/etc/netctl/examples/bridge}}.<br />
<br />
Alternatively, you can use an [[OpenVPN Bridge]], which is useful if you are already familiar with or running it.<br />
<br />
===NAT device setup===<br />
<br />
If you don't have a device you can easily bridge (such as a wlan) you can instead NAT using [[netctl]] by using the same {{ic|/etc/netctl/examples/bridge}} with the following changes:<br />
<br />
BindsToInterfaces=()<br />
IP=static<br />
Address=192.168.100.1/24<br />
FwdDelay=0<br />
<br />
Remember to copy the example to {{ic|/etc/netctl}} and name it whatever you want. You can use any address range and subnet mask you want for the interface (make sure is one you are not already using). Once this interface is up with netctl start <profile> you need to have [[iptables]] put your external interface in masquerade and you need to enable ip forwarding with [[sysctl]]:<br />
<br />
iptables -t nat -A POSTROUTING -o <external interface such as eth0 or wlan0> -j MASQUERADE<br />
sysctl net.ipv4.ip_forward=1<br />
<br />
To have the nat prepared at boot, and to save the iptables and sysctl states:<br />
<br />
netctl enable <profile><br />
iptables-save > /etc/iptables/iptables.rules<br />
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.d/40-ip-forward.conf<br />
<br />
In your container config file, you will need to assign an IP address:<br />
<br />
lxc.network.ipv4 = 192.168.100.2/24<br />
<br />
When you enter your container, you must set the default gateway to the netctl address, which in this example was 192.168.100.1. In any container including {{Pkg|ip}} the following command will work:<br />
<br />
ip route add default via 192.168.100.1<br />
<br />
Or on distros such as Ubuntu that use /etc/network:<br />
<br />
{{hc|/etc/network/if-up.d/routes|<br />
#! /bin/sh<br />
route add default gw 192.168.100.1<br />
exit 0}}<br />
<br />
===Starting a container on boot with [[Systemd]]===<br />
<br />
If you completed a container, starting it when the host boots is possible with the following systemd service template:<br />
<br />
{{bc|1=<br />
[Unit]<br />
Description=Linux Container %i<br />
After=network.target<br />
<br />
[Service]<br />
Type=forking<br />
ExecStartPre=/bin/mount --make-rprivate /<br />
ExecStart=/usr/bin/lxc-start -dn %i<br />
ExecStop=/usr/bin/lxc-stop -n %i<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
}}<br />
Save this file as {{ic|/etc/systemd/system/lxc@.service}}. Then you can register it with this command:<br />
<br />
systemctl enable lxc@CONTAINER_NAME.service<br />
<br />
==Container setup==<br />
<br />
'''Note''' Configuring a container that runs systemd requires specific configuration that is discussed [[lxc-systemd|here]].<br />
<br />
There are various different means to do this<br />
<br />
===Creating the filesystem===<br />
<br />
====Bootstrap====<br />
Bootstrap an install ( [http://blog.mudy.info/tag/mkarchroot/ mkarchroot], [http://wiki.debian.org/Debootstrap debootstrap], [http://www.xen-tools.org/software/rinse/faq.html rinse], [[Install From Existing Linux]] ). You can also just copy/use an existing installation’s complete root filesystem.<br />
<br />
For example, install a small debian to /home/lxc/debianfs<br />
<br />
yaourt -S debootstrap # install debootstrap from AUR<br />
<br />
# method 1:<br />
sudo debootstrap wheezy /home/lxc/debianfst http://ftp.us.debian.org/debian # use us mirror site install wheezy version<br />
# or, method 2: use faster tar ball method<br />
sudo debootstrap --make-tarball wheezy.packages.tgz sid http://debian.osuosl.org/debian/<br />
sudo debootstrap --unpack-tarball wheezy.packages.tgz wheezy debianfs<br />
<br />
====Download existing====<br />
You can download a base install tar ball. OpenVZ templates work just fine.<br />
<br />
====Using the lxc tools====<br />
/usr/bin/lxc-debian {create|destroy|purge|help}<br />
/usr/bin/lxc-fedora {create|destroy|purge|help}<br />
<br />
Nowadays you can create small and simple archlinux container<br />
# lxc-create -n containername -t archlinux -- -P vim,dhclient<br />
<br />
with the template specific options ''-P'' you can add a list of packages to the installation.<br />
<br />
===Creating the device nodes===<br />
Since [[udev]] does not work within the container, you will want to make sure that a certain minimum amount of devices is created for it. This may be done with the following script: <br />
#!/bin/bash<br />
ROOT=$(pwd)<br />
DEV=${ROOT}/dev<br />
mv ${DEV} ${DEV}.old<br />
mkdir -p ${DEV}<br />
mknod -m 666 ${DEV}/null c 1 3<br />
mknod -m 666 ${DEV}/zero c 1 5<br />
mknod -m 666 ${DEV}/random c 1 8<br />
mknod -m 666 ${DEV}/urandom c 1 9<br />
mkdir -m 755 ${DEV}/pts<br />
mkdir -m 1777 ${DEV}/shm<br />
mknod -m 666 ${DEV}/tty c 5 0<br />
mknod -m 600 ${DEV}/console c 5 1<br />
mknod -m 666 ${DEV}/tty0 c 4 0<br />
mknod -m 666 ${DEV}/full c 1 7<br />
mknod -m 600 ${DEV}/initctl p<br />
mknod -m 666 ${DEV}/ptmx c 5 2<br />
<br />
==Container configuration==<br />
<br />
===Configuration file===<br />
<br />
The main configuration files are used to describe how to originally create a container. Though these files may be located anywhere, /etc/lxc is probably a good place.<br />
<br />
'''23/Aug/2010: Be aware that the kernel may not handle additional whitespace in the configuration file. This has been experienced on "lxc.cgroup.devices.allow" settings but may also be true on other settings. If in doubt use only one space wherever whitespace is required.'''<br />
<br />
====Basic settings====<br />
<br />
lxc.utsname = $CONTAINER_NAME<br><br />
lxc.mount = $CONTAINER_FSTAB<br />
lxc.rootfs = $CONTAINER_ROOTFS<br><br />
lxc.network.type = veth<br />
lxc.network.flags = up<br />
lxc.network.link = br0<br />
lxc.network.hwaddr = $CONTAINER_MACADDR <br />
lxc.network.ipv4 = $CONTAINER_IPADDR<br />
lxc.network.name = $CONTAINER_DEVICENAME<br />
<br />
=====Basic settings explained=====<br />
<br />
'''lxc.utsname''' : This will be the name of the cgroup for the container. Once the container is started, you should be able to see a new folder named ''/cgroup/$CONTAINER_NAME''.<br />
<br />
Furthermore, this will also be the value returned by ''hostname'' from within the container. Assuming you have not removed access, the container may overwrite this with it's init script.<br />
<br />
'''lxc.mount''' : This points to an fstab formatted file that is a listing of the mount points used when ''lxc-start'' is called. This file is further explained [[#Configuring fstab|further]]<br />
<br />
====Terminal settings====<br />
<br />
The following configuration is optional. You may add them to your main configuration file if you wish to login via lxc-console, or through a terminal ( e.g.: {{ic|Ctrl+Alt+F1}} ).<br />
<br />
The container can be configured with virtual consoles (tty devices). These may be devices from the host that the container is given permission to use (by its configuration file) or they may be devices created locally within the container.<br />
<br />
The host's virtual consoles are accessed using the key sequence {{ic|Alt+Fn}} (or {{ic|Ctrl+Alt+Fn}} from within an X11 session). The left {{ic|Alt}} key reaches consoles 1 through 12 and the right {{ic|Alt}} key reaches consoles 13 through 24. Further virtual consoles may be reached by the {{ic|Alt+→}} key sequence which steps to the next virtual console.<br />
<br />
The container's local virtual consoles may be accessed using the "lxc-console" command.<br />
<br />
===== Host Virtual Consoles =====<br />
<br />
The container may access the host's virtual consoles if the host is not using them and the container's configuration allows it. Typical container configuration would deny access to all devices and then allow access to specific devices like this:<br />
<br />
lxc.cgroup.devices.deny = a # Deny all access to devices<br />
lxc.cgroup.devices.allow = c 4:0 rwm # /dev/tty0<br />
lxc.cgroup.devices.allow = c 4:1 rwm # /dev/tty1<br />
lxc.cgroup.devices.allow = c 4:2 rwm # /dev/tty2<br />
<br />
For a container to be able to use a host's virtual console it must not be in use by the host. This will most likely require the host's {{ic|/etc/inittab}} to be modified to ensure no getty or other process runs on any virtual console that is to be used by the container.<br />
<br />
After editing the host's {{ic|/etc/inittab}} file, issung a {{ic|killall -HUP init}} will terminate any getty processes that are no longer configured and this will free up the virtual conosole for use by the container.<br />
<br />
Note that local virtual consoles take precedence over host virtual consoles. This is described in the next section.<br />
<br />
===== Local Virtual Consoles =====<br />
<br />
The number of local virtual consoles that the container has is defined in the container's configuration file (normally on the host in {{ic|/etc/lxc}}). It is defined thus:<br />
<br />
lxc.tty = n<br />
<br />
where {{ic|n}} is the number of local virtual consoles required.<br />
<br />
The local virtual consoles are numbered starting at tty1 and take precedence over any of the host's virtual consoles that the container might be entitled to use. This means that, for example, if n = 2 then the container will not be able to use the host's tty1 and tty2 devices even entitled to do so by its configuration file. Setting n to 0 will prevent local virtual consoles from being created thus allowing full access to any of host's virtual consoles that the container might be entitled to use.<br />
<br />
===== /dev/tty Device Files =====<br />
The container must have a tty device file (e.g. {{ic|/dev/tty1}}) for each virtual console (host or local). These can be created thus:<br />
# mknod -m 666 /dev/tty1 c 4 1<br />
# mknod -m 666 /dev/tty2 c 4 2<br />
<br />
and so on...<br />
<br />
In the above, {{ic|c}} means character device, {{ic|4}} is the major device number (tty devices) and {{ic|1}}, {{ic|2}}, {{ic|3}}, etc., is the minor device number (specific tty device). Note that {{ic|/dev/tty0}} is special and always refers to the current virtual console.<br />
<br />
For further info on tty devices, read this: http://www.kernel.org/pub/linux/docs/device-list/devices.txt<br />
<br />
'''If a virtual console's device file does not exist in the container, then the container cannot use the virtual console.'''<br />
<br />
===== Configuring Log-In Ability =====<br />
<br />
The container's virtual consoles may be used for login sessions if the container runs "getty" services on their tty devices. This is normally done by the container's "init" process and is configured in the container's {{ic|/etc/inittab}} file using lines like this:<br />
<br />
c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux<br />
<br />
There is one line per device. The first part {{ic|c1}} is just a unique label, the second part defines applicable run levels, the third part tells init to start a new getty when the current one terminates and the last part gives the command line for the getty. For further information refer to {{ic|man init}}.<br />
<br />
If there is no getty process on a virtual console it will not be possible to log in via that virtual console. A getty is not required on a virtual console unless it is to be used to log in.<br />
<br />
If a virtual console is to allow root logins it also needs to be listed in the container's {{ic|/etc/securetty}} file.<br />
<br />
===== Troubleshooting virtual consoles =====<br />
<br />
If lxc.tty is set to a number, n, then no host devices numbered n or below will be accessible even if the above configuration is present because they will be replaced with local virtual consoles instead.<br />
<br />
A tty device file's major number will change from 4 to 136 if it is a local virtual console. This change is visible within the container but not when viewing the container's devices from the host's filesystem. This information is useful when troubleshooting.<br />
<br />
This can be checked from within a container thus:<br />
<br />
# ls -Al /dev/tty*<br />
crw------- 1 root root 136, 10 Aug 21 21:28 /dev/tty1<br />
crw------- 1 root root 4, 2 Aug 21 21:28 /dev/tty2<br />
<br />
===== Pseudo Terminals =====<br />
<br />
lxc.pseudo = 1024<br />
<br />
Maximum amount of pseudo terminals that may be created in {{ic|/dev/pts}}. Currently, assuming the kernel was compiled with {{ic|CONFIG_DEVPTS_MULTIPLE_INSTANCES}}, this tells lxc-start to mount the devpts filesystem with the newinstance flag.<br />
<br />
====Host device access settings====<br />
<br />
lxc.cgroup.devices.deny = a # Deny all access to devices<br><br />
lxc.cgroup.devices.allow = c 1:3 rwm # dev/null<br />
lxc.cgroup.devices.allow = c 1:5 rwm # dev/zero<br><br />
lxc.cgroup.devices.allow = c 5:1 rwm # dev/console<br />
lxc.cgroup.devices.allow = c 5:0 rwm # dev/tty<br />
lxc.cgroup.devices.allow = c 4:0 rwm # dev/tty0<br><br />
lxc.cgroup.devices.allow = c 1:9 rwm # dev/urandom<br />
lxc.cgroup.devices.allow = c 1:8 rwm # dev/random<br />
lxc.cgroup.devices.allow = c 136:* rwm # dev/pts/*<br />
lxc.cgroup.devices.allow = c 5:2 rwm # dev/pts/ptmx<br><br />
# No idea what this is .. dev/bsg/0:0:0:0 ???<br />
lxc.cgroup.devices.allow = c 254:0 rwm<br />
<br />
=====Host device access settings explained=====<br />
<br />
'''lxc.cgroup.devices.deny''' : By settings this to ''a'', we are stating that the container has access to no devices unless explicitely defined within the configuration file.<br />
<br />
===Configuration file notes===<br />
====At runtime /dev/ttyX devices are recreated====<br />
If you have enabled multiple DevPTS instances in your kernel, lxc-start will recreate ''lxc.tty'' amount of {{ic|/dev/ttyX}} devices when it is executed.<br />
<br />
This means that you will have ''lxc.tty'' amount of pseudo ttys. If you are planning on accessing the container via a "real" terminal ({{ic|Ctrl+Alt+FX}}), make sure that it is a number that is inferior to ''lxc.tty''.<br />
<br />
To tell whether it has been re-created, just log in to the container via either lxc-console or SSH and perform a {{ic|ls -Al}} command on the tty. Devices with a major number of 4 are "real" tty devices whereas a major number of 136 indicates a pts.<br />
<br />
Be aware that this is only visible from within the container itself and not from the host.<br />
<br />
====Containers have access to host's TTY nodes====<br />
<br />
If you do not properly restrict the container's access to the /dev/tty nodes, the container may have access to the host's.<br />
<br />
Taking into consideration that, as previously mentioned, lxc-start recreates ''lxc.tty'' amount of /dev/tty devices, any tty nodes present in the container that are of a greater minor number than ''lxc.tty'' will be linked to the host's.<br />
<br />
=====To access the container from a host TTY=====<br />
<br />
# On the host, verify no getty is started for that tty by checking ''/etc/inittab''.<br />
# In the container, start a getty for that tty.<br />
<br />
=====To prevent access to the host TTY=====<br />
<br />
Please have a look at the configuration statements found in [[#Host device access settings|host device access settings]].<br />
<br />
Via the ''lxc.cgroup.devices.deny = a'' we are preventing access to all host level devices. And then, throuh ''lxc.cgroup.devices.allow = c 4:'''1''' rwm'' we are allowing access to the host's /dev/tty'''1'''. In the above example, simply removing all allow statements for major number 4 and minor > 1 should be sufficient.<br />
<br />
=====To test this access=====<br />
<br />
I may be off here, but looking at the output of the ''ls'' command below should show you both the ''major'' and ''minor'' device numbers. These are located after the user and group and represented as : 4, 2<br />
<br />
# Set lxc.tty to 1<br />
# Make there that the container has dev/tty1 and /dev/tty2<br />
# ''lxc-start'' the container<br />
# ''lxc-console'' into the container<br />
# ''ls -Al /dev/tty''<br>crw------- 1 root root 4, 2 Dec 2 00:20 /dev/tty2<br />
# ''echo "test output" > /dev/tty2''<br />
# ''Ctrl+Alt+F2'' to view the host's second terminal<br />
# You should see "test output" printed on the screen<br />
<br />
====Configuration troubleshooting====<br />
<br />
=====console access denied: Permission denied=====<br />
<br />
If, when executing lxc-console, you receive the error ''lxc-console: console access denied: Permission denied'' you have most likely either omitted lxc.tty or set it to 0.<br />
<br />
=====lxc-console does not provide a login prompt=====<br />
<br />
Though you are reaching a tty on the container, it most likely is not running a getty. You will want to double check that you have a getty defined in the container's ''/etc/inittab'' for the specific tty.<br />
<br />
If using '''systemd''' chances are that a problem with the ''getty@.service'' script will bite you. The script only starts a getty if ''/dev/tty0'' exists. And since this condition is not met in the container, you get no getty. Use this patch, to let ''lxc-console'' finally work.<br />
<br />
<pre><br />
--- /usr/lib/systemd/system/getty@.service.orig 2013-05-30 12:55:28.000000000 +0000<br />
+++ /usr/lib/systemd/system/getty@.service 2013-06-16 23:05:49.827146901 +0000<br />
@@ -20,7 +20,8 @@<br />
# On systems without virtual consoles, don't start any getty. (Note<br />
# that serial gettys are covered by serial-getty@.service, not this<br />
# unit<br />
-ConditionPathExists=/dev/tty0<br />
+ConditionVirtualization=|lxc<br />
+ConditionPathExists=|/dev/tty0<br />
<br />
[Service]<br />
# the VT is cleared by TTYVTDisallocate<br />
</pre><br />
<br />
For more than one getty you have to explicitly enable the needed service (and decrease ''lxc.tty'' in the container configuration). In the ''real'' system a configurable number of getty-services is automatically created from the ''systemd-logind.service''<br />
<br />
===Configuring fstab===<br />
none $CONTAINER_ROOTFS/dev/pts devpts defaults 0 0<br />
none $CONTAINER_ROOTFS/proc proc defaults 0 0<br />
none $CONTAINER_ROOTFS/sys sysfs defaults 0 0<br />
none $CONTAINER_ROOTFS/dev/shm tmpfs defaults 0 0<br />
<br />
This fstab is used by lxc-start when mounting the container. As such, you can define any mount that would be possible on the host such as bind mounting to the host's own filesystem. However, please be aware of any and all security implications that this may have.<br />
<br />
'''Warning''' : You certainly do not want to bind mount the host's /dev to the container as this would allow it to, amongst other things, reboot the host.<br />
<br />
==Container Creation and Destruction==<br />
<br />
===Creation===<br />
lxc-create -f $CONTAINER_CONFIGPATH -n $CONTAINER_NAME<br />
<br />
''lxc-create'' will create /var/lib/lxc/$CONTAINER_NAME with a new copy of the container configuration file found in $CONTAINER_CONFIGPATH.<br />
<br />
As such, if you need to make modifications to the container's configuration file, it's advisable to modify only the original file and then perform ''lxc-destroy'' and ''lxc-create'' operations afterwards. No data will be lost by doing this.<br />
<br />
'''Note''' : When copying the file over, lxc-create will strip all comments from the file.<br />
<br />
'''Note''' : As of lxc-git from atleast ''2009-12-01'', performing lxc-create no longer splits the config file into multiple files and folders. Therefore, we only have the configuration file to worry about.<br />
<br />
===Destruction===<br />
lxc-destroy -n $CONTAINER_NAME<br />
<br />
This will delete /var/lib/lxc/$CONTAINER_NAME which only contains configuration files. No data will be lost.<br />
<br />
==Readying the host for virtualization==<br />
===/etc/inittab===<br />
# Comment out any getty that are not required<br />
<br />
===/etc/rc.sysinit replacement===<br />
Since we are running in a virtual environment, a number of steps undertaken by rc.sysinit are superfluous and may even flat out fail or stall. As such, until the initscripts are made virtualization aware, this will take some hack and slash.<br />
<br />
For now, simply replace the file : <br />
#!/bin/bash<br />
# Whatever is needed to clean out old daemon/service pids from your container<br />
rm -f $(find /var/run -name '*pid')<br />
rm -f /var/lock/subsys/*<br><br />
# Configure network settings<br />
## You can either use dhcp here, manually configure your<br />
## interfaces or try to get the rc.d/network script working.<br />
## There have been reports that network failed in this<br />
## environment.<br />
ip route add default via 192.168.10.1<br />
echo > /etc/resolv.conf search your-domain<br />
echo >> /etc/resolv.conf nameserver 192.168.10.1<br><br />
# Initally we do not have any container originated mounts<br />
rm -f /etc/mtab<br />
touch /etc/mtab<br />
<br />
===/etc/rc.conf cleanup===<br />
You may want to remove any and all hardware related daemons from the DAEMONS line. Furthermore, depending on your situation, you may also want to remove the ''network'' daemon.<br />
<br />
===TBC===<br />
<br />
==Known Problems==<br />
<br />
===Using systemd inside a docker container results in a segfault===<br />
<br />
See [https://github.com/dotcloud/docker/issues/3629 docker github issue], launching /usr/lib/systemd/systemd --system results in a segfault, last tested with systemd 208-10.<br />
<br />
===Container cannot be shutdown if using systemd===<br />
''lxc-shutdown'' should be used for clean shutdown or reboot of the container, but only the ''reboot'' is working out of the box when using systemd.<br />
<br />
Shutdown will be signalled to the container with ''SIGPWR'' but current systemd doesn't have any services in place to handle the ''sigpwr.target''. But for the container we can simply reuse the ''poweroff.target'' and get exactly what we want.<br />
# ln -s /usr/lib/systemd/system/poweroff.target ${CONTAINER_RFS}/etc/systemd/system/sigpwr.target<br />
<br />
==See Also==<br />
*[[Arch systemd container]]<br />
*[http://www.ibm.com/developerworks/linux/library/l-lxc-containers/ LXC@developerWorks]<br />
*[http://docs.docker.io/en/latest/installation/archlinux/ Docker Installation on ArchLinux]</div>Adonmhttps://wiki.archlinux.org/index.php?title=Linux_Containers&diff=293854Linux Containers2014-01-21T18:31:16Z<p>Adonm: Added some details about current state of docker</p>
<hr />
<div>[[Category:Security]]<br />
[[Category:Virtualization]]<br />
{{Stub|Currently just a rough draft... I think I will need to restructure this a bit and I have also noticed I have become a bit too verbose -_-;; I will be along shortly to complete this as well as clean it up.}}<br />
<br />
==Introduction==<br />
<br />
===Synopsis===<br />
<br />
Linux Containers (LXC) are an operating system-level virtualization method for running multiple isolated server installs (containers) on a single control host. LXC does not provide a virtual machine, but rather provides a virtual environment that has its own process and network space. It is similar to a chroot, but offers much more isolation.<br />
<br />
===About this HowTo===<br />
<br />
This document is intended as an overview on setting up and deploying containers, and is not an in depth detailed instruction by instruction guide. A certain amount of prerequisite knowledge and skills are assumed (running commands as root, kernel configuration, mounting filesystems, shell scripting, chroot type environments, networking setup, etc).<br />
<br />
Much of this was taken verbatim from [http://lxc.teegra.net/ Dwight Schauer], [http://tuxce.selfip.org/informatique/conteneurs-linux-lxc Tuxce] and [http://artisan.karma-lab.net/node/1749 Ulhume]. It has been copied here both to enable to community to share their collective wisdom and to expand on a few points.<br />
<br />
===Less verbose tutorial===<br />
<br />
[[User:Delerious010|Delerious010]] 21:43, 1 December 2009 (EST) I have come to realize I have added a lot of text to this HowTo. If you would like something more streamlined, please head on over to [http://lxc.teegra.net/ http://lxc.teegra.net/] for Dwight's excellent guide.<br />
<br />
===Testing capabilities===<br />
<br />
Once the lxc package is installed, running lxc-checkconfig will print out a list of your system's capabilities<br />
<br />
==Host configuration==<br />
<br />
===Control group filesystem===<br />
<br />
LXC depends on the control group filesystem being mounted. The standard location for it is {{ic|/sys/fs/cgroup}}. If you use systemd, the cgroup filesystem will be mounted automatically, including the default controllers, but with other initsystems you might have to do it yourself:<br />
<br />
mount -t tmpfs none /sys/fs/cgroup<br />
<br />
===Userspace tools===<br />
<br />
Install {{Pkg|lxc}} from [community]. For networking, you will probably need {{Pkg|bridge-utils}} and {{Pkg|netctl}} or {{Pkg|openvpn}}.<br />
<br />
===Bridge device setup===<br />
<br />
The preferred way to setup a Bridge in Arch is with [[netctl]], and is explained in detail in the article: [[Bridge_with_netctl]]. In the config for your container, just specify the host interface as whatever you name your bridge (usually br0). You can find a skeleton implementation in {{ic|/etc/netctl/examples/bridge}}.<br />
<br />
Alternatively, you can use an [[OpenVPN Bridge]], which is useful if you are already familiar with or running it.<br />
<br />
===NAT device setup===<br />
<br />
If you don't have a device you can easily bridge (such as a wlan) you can instead NAT using [[netctl]] by using the same {{ic|/etc/netctl/examples/bridge}} with the following changes:<br />
<br />
BindsToInterfaces=()<br />
IP=static<br />
Address=192.168.100.1/24<br />
FwdDelay=0<br />
<br />
Remember to copy the example to {{ic|/etc/netctl}} and name it whatever you want. You can use any address range and subnet mask you want for the interface (make sure is one you are not already using). Once this interface is up with netctl start <profile> you need to have [[iptables]] put your external interface in masquerade and you need to enable ip forwarding with [[sysctl]]:<br />
<br />
iptables -t nat -A POSTROUTING -o <external interface such as eth0 or wlan0> -j MASQUERADE<br />
sysctl net.ipv4.ip_forward=1<br />
<br />
To have the nat prepared at boot, and to save the iptables and sysctl states:<br />
<br />
netctl enable <profile><br />
iptables-save > /etc/iptables/iptables.rules<br />
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.d/40-ip-forward.conf<br />
<br />
In your container config file, you will need to assign an IP address:<br />
<br />
lxc.network.ipv4 = 192.168.100.2/24<br />
<br />
When you enter your container, you must set the default gateway to the netctl address, which in this example was 192.168.100.1. In any container including {{Pkg|ip}} the following command will work:<br />
<br />
ip route add default via 192.168.100.1<br />
<br />
Or on distros such as Ubuntu that use /etc/network:<br />
<br />
{{hc|/etc/network/if-up.d/routes|<br />
#! /bin/sh<br />
route add default gw 192.168.100.1<br />
exit 0}}<br />
<br />
===Starting a container on boot with [[Systemd]]===<br />
<br />
If you completed a container, starting it when the host boots is possible with the following systemd service template:<br />
<br />
{{bc|1=<br />
[Unit]<br />
Description=Linux Container %i<br />
After=network.target<br />
<br />
[Service]<br />
Type=forking<br />
ExecStartPre=/bin/mount --make-rprivate /<br />
ExecStart=/usr/bin/lxc-start -dn %i<br />
ExecStop=/usr/bin/lxc-stop -n %i<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
}}<br />
Save this file as {{ic|/etc/systemd/system/lxc@.service}}. Then you can register it with this command:<br />
<br />
systemctl enable lxc@CONTAINER_NAME.service<br />
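<br />
To start the container immediately instead of waiting for a reboot, the same unit can also be started directly:<br />
<br />
 systemctl start lxc@CONTAINER_NAME.service<br />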
<br />
==Container setup==<br />
<br />
'''Note''' : Configuring a container that runs systemd requires specific configuration that is discussed [[lxc-systemd|here]].<br />
<br />
There are several ways to do this.<br />
<br />
===Creating the filesystem===<br />
<br />
====Bootstrap====<br />
Bootstrap an install ([http://blog.mudy.info/tag/mkarchroot/ mkarchroot], [http://wiki.debian.org/Debootstrap debootstrap], [http://www.xen-tools.org/software/rinse/faq.html rinse], [[Install From Existing Linux]]), or simply copy and use an existing installation's complete root filesystem.<br />
<br />
For example, to install a minimal Debian system to {{ic|/home/lxc/debianfs}}:<br />
<br />
yaourt -S debootstrap # install debootstrap from AUR<br />
<br />
# method 1:<br />
sudo debootstrap wheezy /home/lxc/debianfs http://ftp.us.debian.org/debian # install wheezy from a US mirror<br />
# or, method 2: download the packages into a tarball first, then unpack it<br />
sudo debootstrap --make-tarball=wheezy.packages.tgz wheezy /home/lxc/debianfs http://debian.osuosl.org/debian/<br />
sudo debootstrap --unpack-tarball=$PWD/wheezy.packages.tgz wheezy /home/lxc/debianfs # the tarball path must be absolute<br />
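<br />
Before booting the container you will usually also want to set a root password inside the new root filesystem. A sketch using chroot:<br />
<br />
 sudo chroot /home/lxc/debianfs passwd<br />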
<br />
====Download existing====<br />
You can download a base install tarball. OpenVZ templates work just fine.<br />
<br />
====Using the lxc tools====<br />
/usr/bin/lxc-debian {create|destroy|purge|help}<br />
/usr/bin/lxc-fedora {create|destroy|purge|help}<br />
<br />
Nowadays you can create a small and simple Arch Linux container:<br />
# lxc-create -n containername -t archlinux -- -P vim,dhclient<br />
<br />
With the template-specific option ''-P'' you can add a comma-separated list of packages to the installation.<br />
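<br />
The templates shipped with the {{Pkg|lxc}} package are normally installed under {{ic|/usr/share/lxc/templates}}; listing that directory shows which other distributions can be created the same way:<br />
<br />
 ls /usr/share/lxc/templates/<br />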
<br />
===Creating the device nodes===<br />
Since [[udev]] does not work within the container, you will want to make sure that a certain minimum set of device nodes is created for it. This may be done with the following script: <br />
#!/bin/bash<br />
ROOT=$(pwd)<br />
DEV=${ROOT}/dev<br />
mv ${DEV} ${DEV}.old<br />
mkdir -p ${DEV}<br />
mknod -m 666 ${DEV}/null c 1 3<br />
mknod -m 666 ${DEV}/zero c 1 5<br />
mknod -m 666 ${DEV}/random c 1 8<br />
mknod -m 666 ${DEV}/urandom c 1 9<br />
mkdir -m 755 ${DEV}/pts<br />
mkdir -m 1777 ${DEV}/shm<br />
mknod -m 666 ${DEV}/tty c 5 0<br />
mknod -m 600 ${DEV}/console c 5 1<br />
mknod -m 666 ${DEV}/tty0 c 4 0<br />
mknod -m 666 ${DEV}/full c 1 7<br />
mknod -m 600 ${DEV}/initctl p<br />
mknod -m 666 ${DEV}/ptmx c 5 2<br />
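<br />
The script operates on the current working directory, so run it from the top of the container's root filesystem. A usage sketch, assuming it was saved as the hypothetical {{ic|/usr/local/bin/lxc-mkdevs}}:<br />
<br />
 cd $CONTAINER_ROOTFS<br />
 bash /usr/local/bin/lxc-mkdevs<br />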
<br />
==Container configuration==<br />
<br />
===Configuration file===<br />
<br />
The main configuration files are used to describe how to originally create a container. Though these files may be located anywhere, /etc/lxc is probably a good place.<br />
<br />
'''23/Aug/2010: Be aware that the kernel may not handle additional whitespace in the configuration file. This has been experienced on "lxc.cgroup.devices.allow" settings but may also be true on other settings. If in doubt use only one space wherever whitespace is required.'''<br />
<br />
====Basic settings====<br />
<br />
lxc.utsname = $CONTAINER_NAME<br />
lxc.mount = $CONTAINER_FSTAB<br />
lxc.rootfs = $CONTAINER_ROOTFS<br />
lxc.network.type = veth<br />
lxc.network.flags = up<br />
lxc.network.link = br0<br />
lxc.network.hwaddr = $CONTAINER_MACADDR<br />
lxc.network.ipv4 = $CONTAINER_IPADDR<br />
lxc.network.name = $CONTAINER_DEVICENAME<br />
<br />
=====Basic settings explained=====<br />
<br />
'''lxc.utsname''' : This will be the name of the cgroup for the container. Once the container is started, you should be able to see a new directory named after the container under the cgroup filesystem mount point ({{ic|/sys/fs/cgroup}} on current systems).<br />
<br />
Furthermore, this will also be the value returned by ''hostname'' from within the container. Assuming you have not removed access, the container may overwrite this with its init script.<br />
<br />
'''lxc.mount''' : This points to an fstab-formatted file listing the mount points used when ''lxc-start'' is called. The file is explained [[#Configuring fstab|below]].<br />
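<br />
For illustration, a filled-in configuration for a hypothetical container named {{ic|web01}} might look like this (every value below is a placeholder chosen for this example):<br />
<br />
 lxc.utsname = web01<br />
 lxc.mount = /etc/lxc/web01.fstab<br />
 lxc.rootfs = /home/lxc/web01<br />
 lxc.network.type = veth<br />
 lxc.network.flags = up<br />
 lxc.network.link = br0<br />
 lxc.network.hwaddr = 02:00:00:aa:bb:01<br />
 lxc.network.ipv4 = 192.168.100.10/24<br />
 lxc.network.name = eth0<br />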
<br />
====Terminal settings====<br />
<br />
The following configuration is optional. You may add it to your main configuration file if you wish to log in via lxc-console or through a terminal (e.g. {{ic|Ctrl+Alt+F1}}).<br />
<br />
The container can be configured with virtual consoles (tty devices). These may be devices from the host that the container is given permission to use (by its configuration file) or they may be devices created locally within the container.<br />
<br />
The host's virtual consoles are accessed using the key sequence {{ic|Alt+Fn}} (or {{ic|Ctrl+Alt+Fn}} from within an X11 session). The left {{ic|Alt}} key reaches consoles 1 through 12 and the right {{ic|Alt}} key reaches consoles 13 through 24. Further virtual consoles may be reached by the {{ic|Alt+→}} key sequence which steps to the next virtual console.<br />
<br />
The container's local virtual consoles may be accessed using the "lxc-console" command.<br />
<br />
===== Host Virtual Consoles =====<br />
<br />
The container may access the host's virtual consoles if the host is not using them and the container's configuration allows it. Typical container configuration would deny access to all devices and then allow access to specific devices like this:<br />
<br />
lxc.cgroup.devices.deny = a # Deny all access to devices<br />
lxc.cgroup.devices.allow = c 4:0 rwm # /dev/tty0<br />
lxc.cgroup.devices.allow = c 4:1 rwm # /dev/tty1<br />
lxc.cgroup.devices.allow = c 4:2 rwm # /dev/tty2<br />
<br />
For a container to be able to use a host's virtual console it must not be in use by the host. This will most likely require the host's {{ic|/etc/inittab}} to be modified to ensure no getty or other process runs on any virtual console that is to be used by the container.<br />
<br />
After editing the host's {{ic|/etc/inittab}} file, issuing {{ic|killall -HUP init}} will terminate any getty processes that are no longer configured, freeing up those virtual consoles for use by the container.<br />
<br />
Note that local virtual consoles take precedence over host virtual consoles. This is described in the next section.<br />
<br />
===== Local Virtual Consoles =====<br />
<br />
The number of local virtual consoles that the container has is defined in the container's configuration file (normally on the host in {{ic|/etc/lxc}}). It is defined thus:<br />
<br />
lxc.tty = n<br />
<br />
where {{ic|n}} is the number of local virtual consoles required.<br />
<br />
The local virtual consoles are numbered starting at tty1 and take precedence over any of the host's virtual consoles that the container might be entitled to use. This means that, for example, if n = 2 then the container will not be able to use the host's tty1 and tty2 devices even if entitled to do so by its configuration file. Setting n to 0 will prevent local virtual consoles from being created, thus allowing full access to any of the host's virtual consoles that the container might be entitled to use.<br />
<br />
===== /dev/tty Device Files =====<br />
The container must have a tty device file (e.g. {{ic|/dev/tty1}}) for each virtual console (host or local). These can be created thus:<br />
# mknod -m 666 /dev/tty1 c 4 1<br />
# mknod -m 666 /dev/tty2 c 4 2<br />
<br />
and so on...<br />
<br />
In the above, {{ic|c}} means character device, {{ic|4}} is the major device number (tty devices) and {{ic|1}}, {{ic|2}}, {{ic|3}}, etc., is the minor device number (specific tty device). Note that {{ic|/dev/tty0}} is special and always refers to the current virtual console.<br />
<br />
For further info on tty devices, read this: http://www.kernel.org/pub/linux/docs/device-list/devices.txt<br />
<br />
'''If a virtual console's device file does not exist in the container, then the container cannot use the virtual console.'''<br />
<br />
===== Configuring Log-In Ability =====<br />
<br />
The container's virtual consoles may be used for login sessions if the container runs "getty" services on their tty devices. This is normally done by the container's "init" process and is configured in the container's {{ic|/etc/inittab}} file using lines like this:<br />
<br />
c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux<br />
<br />
There is one line per device. The first part {{ic|c1}} is just a unique label, the second part defines applicable run levels, the third part tells init to start a new getty when the current one terminates and the last part gives the command line for the getty. For further information refer to {{ic|man init}}.<br />
<br />
If there is no getty process on a virtual console it will not be possible to log in via that virtual console. A getty is not required on a virtual console unless it is to be used to log in.<br />
<br />
If a virtual console is to allow root logins it also needs to be listed in the container's {{ic|/etc/securetty}} file.<br />
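<br />
For example, to permit root logins on the first two virtual consoles you could append them to that file (a sketch, run inside the container):<br />
<br />
 echo tty1 >> /etc/securetty<br />
 echo tty2 >> /etc/securetty<br />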
<br />
===== Troubleshooting virtual consoles =====<br />
<br />
If lxc.tty is set to a number, n, then no host devices numbered n or below will be accessible even if the above configuration is present because they will be replaced with local virtual consoles instead.<br />
<br />
A tty device file's major number will change from 4 to 136 if it is a local virtual console. This change is visible within the container but not when viewing the container's devices from the host's filesystem. This information is useful when troubleshooting.<br />
<br />
This can be checked from within a container thus:<br />
<br />
# ls -Al /dev/tty*<br />
crw------- 1 root root 136, 10 Aug 21 21:28 /dev/tty1<br />
crw------- 1 root root 4, 2 Aug 21 21:28 /dev/tty2<br />
<br />
===== Pseudo Terminals =====<br />
<br />
lxc.pseudo = 1024<br />
<br />
The maximum number of pseudo-terminals that may be created in {{ic|/dev/pts}}. Currently, assuming the kernel was compiled with {{ic|CONFIG_DEVPTS_MULTIPLE_INSTANCES}}, this tells lxc-start to mount the devpts filesystem with the newinstance flag.<br />
<br />
====Host device access settings====<br />
<br />
lxc.cgroup.devices.deny = a # Deny all access to devices<br />
lxc.cgroup.devices.allow = c 1:3 rwm # /dev/null<br />
lxc.cgroup.devices.allow = c 1:5 rwm # /dev/zero<br />
lxc.cgroup.devices.allow = c 5:1 rwm # /dev/console<br />
lxc.cgroup.devices.allow = c 5:0 rwm # /dev/tty<br />
lxc.cgroup.devices.allow = c 4:0 rwm # /dev/tty0<br />
lxc.cgroup.devices.allow = c 1:9 rwm # /dev/urandom<br />
lxc.cgroup.devices.allow = c 1:8 rwm # /dev/random<br />
lxc.cgroup.devices.allow = c 136:* rwm # /dev/pts/*<br />
lxc.cgroup.devices.allow = c 5:2 rwm # /dev/pts/ptmx<br />
# /dev/bsg/0:0:0:0 (purpose unclear)<br />
lxc.cgroup.devices.allow = c 254:0 rwm<br />
<br />
=====Host device access settings explained=====<br />
<br />
'''lxc.cgroup.devices.deny''' : By setting this to ''a'', we are stating that the container has access to no devices unless explicitly defined within the configuration file.<br />
<br />
===Configuration file notes===<br />
====At runtime /dev/ttyX devices are recreated====<br />
If you have enabled multiple DevPTS instances in your kernel, lxc-start will recreate ''lxc.tty'' amount of {{ic|/dev/ttyX}} devices when it is executed.<br />
<br />
This means that you will have ''lxc.tty'' pseudo ttys. If you are planning on accessing the container via a "real" terminal ({{ic|Ctrl+Alt+FX}}), make sure its number is greater than ''lxc.tty'', otherwise that device will have been replaced by a pseudo tty (see [[#Containers have access to host's TTY nodes|below]]).<br />
<br />
To tell whether it has been re-created, just log in to the container via either lxc-console or SSH and perform a {{ic|ls -Al}} command on the tty. Devices with a major number of 4 are "real" tty devices whereas a major number of 136 indicates a pts.<br />
<br />
Be aware that this is only visible from within the container itself and not from the host.<br />
<br />
====Containers have access to host's TTY nodes====<br />
<br />
If you do not properly restrict the container's access to the /dev/tty nodes, the container may have access to the host's.<br />
<br />
Taking into consideration that, as previously mentioned, lxc-start recreates ''lxc.tty'' amount of /dev/tty devices, any tty nodes present in the container that are of a greater minor number than ''lxc.tty'' will be linked to the host's.<br />
<br />
=====To access the container from a host TTY=====<br />
<br />
# On the host, verify no getty is started for that tty by checking ''/etc/inittab''.<br />
# In the container, start a getty for that tty.<br />
<br />
=====To prevent access to the host TTY=====<br />
<br />
Please have a look at the configuration statements found in [[#Host device access settings|host device access settings]].<br />
<br />
With ''lxc.cgroup.devices.deny = a'' we prevent access to all host-level devices. Then, through ''lxc.cgroup.devices.allow = c 4:'''1''' rwm'', we allow access to the host's /dev/tty'''1'''. In the above example, simply removing all allow statements for major number 4 and minor > 1 should be sufficient.<br />
<br />
=====To test this access=====<br />
<br />
The output of the ''ls'' command below shows both the ''major'' and ''minor'' device numbers; they are located after the owner and group, represented as ''4, 2''.<br />
<br />
# Set lxc.tty to 1<br />
# Make sure that the container has /dev/tty1 and /dev/tty2<br />
# ''lxc-start'' the container<br />
# ''lxc-console'' into the container<br />
# ''ls -Al /dev/tty''<br>crw------- 1 root root 4, 2 Dec 2 00:20 /dev/tty2<br />
# ''echo "test output" > /dev/tty2''<br />
# ''Ctrl+Alt+F2'' to view the host's second terminal<br />
# You should see "test output" printed on the screen<br />
<br />
====Configuration troubleshooting====<br />
<br />
=====console access denied: Permission denied=====<br />
<br />
If, when executing lxc-console, you receive the error ''lxc-console: console access denied: Permission denied'' you have most likely either omitted lxc.tty or set it to 0.<br />
<br />
=====lxc-console does not provide a login prompt=====<br />
<br />
Though you are reaching a tty on the container, it most likely is not running a getty. You will want to double check that you have a getty defined in the container's ''/etc/inittab'' for the specific tty.<br />
<br />
If the container uses '''systemd''', chances are that the ''getty@.service'' unit is the problem: it only starts a getty if ''/dev/tty0'' exists, and since this condition is not met in the container, no getty is started. Apply the following patch to make ''lxc-console'' work.<br />
<br />
<pre><br />
--- /usr/lib/systemd/system/getty@.service.orig 2013-05-30 12:55:28.000000000 +0000<br />
+++ /usr/lib/systemd/system/getty@.service 2013-06-16 23:05:49.827146901 +0000<br />
@@ -20,7 +20,8 @@<br />
# On systems without virtual consoles, don't start any getty. (Note<br />
# that serial gettys are covered by serial-getty@.service, not this<br />
# unit<br />
-ConditionPathExists=/dev/tty0<br />
+ConditionVirtualization=|lxc<br />
+ConditionPathExists=|/dev/tty0<br />
<br />
[Service]<br />
# the VT is cleared by TTYVTDisallocate<br />
</pre><br />
<br />
For more than one getty you have to explicitly enable each needed service (and adjust ''lxc.tty'' in the container configuration accordingly). On a ''real'' system, a configurable number of getty services is created automatically by ''systemd-logind.service''.<br />
<br />
===Configuring fstab===<br />
none $CONTAINER_ROOTFS/dev/pts devpts defaults 0 0<br />
none $CONTAINER_ROOTFS/proc proc defaults 0 0<br />
none $CONTAINER_ROOTFS/sys sysfs defaults 0 0<br />
none $CONTAINER_ROOTFS/dev/shm tmpfs defaults 0 0<br />
<br />
This fstab is used by lxc-start when mounting the container. As such, you can define any mount that would be possible on the host such as bind mounting to the host's own filesystem. However, please be aware of any and all security implications that this may have.<br />
<br />
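As an example, a host directory could be made available inside the container with a bind-mount entry along these lines (a sketch; it assumes the host directory {{ic|/srv/data}} exists and that the target directory has been created inside the container's rootfs):<br />
<br />
 /srv/data $CONTAINER_ROOTFS/srv/data none bind 0 0<br />
<br />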
'''Warning''' : You certainly do not want to bind mount the host's /dev to the container as this would allow it to, amongst other things, reboot the host.<br />
<br />
==Container Creation and Destruction==<br />
<br />
===Creation===<br />
lxc-create -f $CONTAINER_CONFIGPATH -n $CONTAINER_NAME<br />
<br />
''lxc-create'' will create /var/lib/lxc/$CONTAINER_NAME with a new copy of the container configuration file found in $CONTAINER_CONFIGPATH.<br />
<br />
As such, if you need to make modifications to the container's configuration file, it's advisable to modify only the original file and then perform ''lxc-destroy'' and ''lxc-create'' operations afterwards. No data will be lost by doing this.<br />
<br />
'''Note''' : When copying the file over, lxc-create will strip all comments from the file.<br />
<br />
'''Note''' : As of lxc-git from at least ''2009-12-01'', performing lxc-create no longer splits the config file into multiple files and folders. Therefore, we only have the configuration file to worry about.<br />
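<br />
Once created, the container can be started in the background and attached to with the standard tools, for example:<br />
<br />
 lxc-start -n $CONTAINER_NAME -d<br />
 lxc-console -n $CONTAINER_NAME<br />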
<br />
===Destruction===<br />
lxc-destroy -n $CONTAINER_NAME<br />
<br />
This will delete /var/lib/lxc/$CONTAINER_NAME which only contains configuration files. No data will be lost.<br />
<br />
==Readying the container for boot==<br />
===/etc/inittab===<br />
# Comment out any gettys that are not required<br />
<br />
===/etc/rc.sysinit replacement===<br />
Since we are running in a virtual environment, a number of steps undertaken by rc.sysinit are superfluous and may even flat out fail or stall. As such, until the initscripts are made virtualization aware, this will take some hack and slash.<br />
<br />
For now, simply replace the file:<br />
#!/bin/bash<br />
# Whatever is needed to clean out old daemon/service pids from your container<br />
rm -f $(find /var/run -name '*pid')<br />
rm -f /var/lock/subsys/*<br />
# Configure network settings<br />
## You can either use dhcp here, manually configure your<br />
## interfaces or try to get the rc.d/network script working.<br />
## There have been reports that network failed in this<br />
## environment.<br />
ip route add default via 192.168.10.1<br />
echo "search your-domain" > /etc/resolv.conf<br />
echo "nameserver 192.168.10.1" >> /etc/resolv.conf<br />
# Initially we do not have any container-originated mounts<br />
rm -f /etc/mtab<br />
touch /etc/mtab<br />
<br />
===/etc/rc.conf cleanup===<br />
You may want to remove any and all hardware related daemons from the DAEMONS line. Furthermore, depending on your situation, you may also want to remove the ''network'' daemon.<br />
<br />
===TBC===<br />
<br />
==Known Problems==<br />
<br />
===Using systemd inside a docker container results in a segfault===<br />
<br />
See the [https://github.com/dotcloud/docker/issues/3629 docker GitHub issue]: launching {{ic|/usr/lib/systemd/systemd --system}} results in a segfault (last tested with systemd 208-10).<br />
<br />
===Container cannot be shutdown if using systemd===<br />
''lxc-shutdown'' should be used for a clean shutdown or reboot of the container, but only ''reboot'' works out of the box when the container runs systemd.<br />
<br />
Shutdown is signalled to the container with ''SIGPWR'', but current systemd does not ship any unit to handle ''sigpwr.target''. For the container we can simply reuse ''poweroff.target'' and get exactly what we want.<br />
# ln -s /usr/lib/systemd/system/poweroff.target ${CONTAINER_RFS}/etc/systemd/system/sigpwr.target<br />
<br />
==See Also==<br />
*[[Arch systemd container]]<br />
*[http://www.ibm.com/developerworks/linux/library/l-lxc-containers/ LXC@developerWorks]<br />
*[http://docs.docker.io/en/latest/installation/archlinux/ Docker Installation on ArchLinux]</div>Adonm