Linux Containers

Linux Containers (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (the LXC host).

LXC does not provide a virtual machine, but rather a virtual environment that has its own CPU, memory, block I/O, network, etc. space. This is provided by the cgroups feature in the Linux kernel on the LXC host. It is similar to a chroot, but offers much more isolation.

This document is intended as an overview of setting up and deploying containers. A certain amount of prerequisite knowledge and skills is required (network setup, running commands as root, installing packages from the AUR, kernel configuration, mounting filesystems, etc.).

Setup

Virtualization features for LXC containers are provided by the Linux kernel and the LXC userspace tools. This section covers basic information on how to set up an LXC-capable system.

Packages

The lxc package is available in the official repositories. It provides the LXC userspace tools, which are used to manage LXC containers on the LXC host. Install lxc from the official repositories.

It is also highly recommended to install bridge-utils and netctl, which are useful when configuring the different network virtualization types. See also Bridge with netctl.

You can also optionally install OpenVPN; see OpenVPN Bridge.

LXC depends on the control group filesystem being mounted. The standard location for it is /sys/fs/cgroup. The cgroup filesystem is mounted automatically by systemd.
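
To verify that it is mounted, you can run for example:

$ findmnt /sys/fs/cgroup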

Depending on which Linux OS you want to install in your container, you might need to install additional packages that are used by the container templates. If you plan to create Arch Linux containers, installing arch-install-scripts from the official repositories is enough.

To install containers of other OSes, you need OS-specific packages; see the list in #Container setup below.

Testing Setup

Once the lxc package is installed, running lxc-checkconfig will print out a list of your system's capabilities. For a correctly configured system, the output should be similar to:

$ lxc-checkconfig
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: missing
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

If, however, lxc-checkconfig shows missing components, this usually means that your kernel is not properly configured for full LXC support. The linux kernel package from the official repositories has LXC support (but not for unprivileged users, until the CONFIG_USER_NS option is enabled in the kernel). You can check a kernel's LXC configuration before actually booting it by pointing the CONFIG environment variable at the kernel's config file:

$ CONFIG=/path/to/kernel/config /usr/bin/lxc-checkconfig

Container setup

This section provides information on how to install various containers. You can find all the templates that ship with LXC in the /usr/share/lxc/templates directory:

$ ls /usr/share/lxc/templates
lxc-alpine  lxc-altlinux  lxc-archlinux  lxc-busybox  lxc-centos  lxc-cirros  lxc-debian  lxc-download  lxc-fedora  lxc-gentoo  lxc-openmandriva  lxc-opensuse  lxc-oracle  lxc-plamo  lxc-sshd  lxc-ubuntu  lxc-ubuntu-cloud

These templates are bash scripts that build an LXC container. Before creating a container from a specific template, make sure that you have all the packages installed that are required to build it. Required packages for popular containers are listed below:

* Arch Linux - arch-install-scripts
* Debian - debootstrap (AUR)
* CentOS - yum (AUR)
* ...

Create Container

To create a container, use the lxc-create command and specify a template. Templates can also be passed extra arguments after --, which usually allow you to install a specific release. Examples:

$ lxc-create -n CONTAINER_NAME -t TEMPLATE
$ lxc-create -n CONTAINER_NAME -t TEMPLATE -- -r RELEASE
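
For instance, using the download template from the listing above, you can fetch a prebuilt image for a specific distribution, release and architecture (the values here are only examples):

$ lxc-create -n debian-jessie -t download -- -d debian -r jessie -a amd64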

Containers are stored under /var/lib/lxc/CONTAINER_NAME. The main configuration file is /var/lib/lxc/CONTAINER_NAME/config, and the root filesystem is located at /var/lib/lxc/CONTAINER_NAME/rootfs.

See the section Containers below for information on setting up OS-specific LXC containers.

If you are using Btrfs, you can append -B btrfs to the lxc-create command if you want LXC to create a Btrfs subvolume for storing the container's rootfs. This comes in handy when you want to clone containers with the help of the lxc-clone command, which will then use Btrfs features for cloning and for cloning from snapshots:

$ lxc-create -n CONTAINER_NAME -t TEMPLATE -B btrfs
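
For example, a snapshot clone of a container named arch (the names here are hypothetical) could then be made with:

$ lxc-clone -s -o arch -n arch-clone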

It is also worth noting that during creation of some containers the setup generates private GPG keys for OS package managers, so it is important that your random devices are properly seeded with random data. Otherwise, the setup process can hang while waiting for entropy. To avoid this, you can install the haveged package and run haveged to seed /dev/random before issuing the lxc-create command.
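
For example (haveged runs as a daemon by default):

# pacman -S haveged
# haveged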

List Containers

You can list all installed LXC containers with the lxc-ls command:

$ lxc-ls

You can also pass the -f argument to get more detailed output:

$ lxc-ls -f

Start Container

After a container is created, you can start it with the lxc-start command:

$ lxc-start -n CONTAINER_NAME

This will output all the boot messages in the current terminal and present a login prompt. You can log in and use the container. Once you are done, issue the halt command inside the container to shut it down.

Most of the time, you will want to start the LXC container in the background and then use the lxc-attach command to log in to it. To start an LXC container in the background:

$ lxc-start -n CONTAINER_NAME -d

Attach to the Container

To attach to an LXC container running in the background:

$ lxc-attach -n CONTAINER_NAME

Stop Container

To stop an LXC container:

$ lxc-stop -n CONTAINER_NAME

Starting containers on Boot

You can make LXC containers start at boot by enabling the container-specific systemd service:

# systemctl enable lxc@CONTAINER_NAME.service

Network Configuration

This section describes the network configuration required on the LXC host before you create LXC containers.

LXC containers support different virtual network types (see #Virtual Network Types below). For most virtual networking types to work, you will need to configure a bridge device on your host. LXC expects a br0 interface to be available during creation of some containers; it is also used in the examples below with veth networking. Here are two ways to set up a bridge on Arch: with brctl and with netctl.

brctl

Make sure you have the bridge-utils package installed:

# pacman -S bridge-utils

Create the br0 interface:

# brctl addbr br0
# ifconfig br0 10.0.0.1/24
# ifconfig br0

Then change the networking section of the container's config file to look like this. You can pick an IP other than 10.0.0.100 if you wish; make sure you do if you run more than one container simultaneously:

#networking
lxc.network.type=veth
lxc.network.link=br0
lxc.network.ipv4=10.0.0.100
lxc.network.ipv4.gateway=10.0.0.1
lxc.network.flags=up
lxc.network.name=eth0
lxc.network.mtu=1500

Start the container and make sure you can ping the host computer:

$ ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms
...

Stop the container. To share internet with the host, you need iptables to route the container's requests outside the bridge. Run the following on the host, not inside the container:

# iptables -t nat -A POSTROUTING -s 10.0.0.100 -o eth0 -j MASQUERADE
# sysctl net.ipv4.ip_forward=1

Here we assume that your internet adapter is named eth0; this also works with wifi adapters. Run ip addr to see which adapters are available to you. Start the container and ping some external site:

$ ping www.archlinux.org
PING gudrun.archlinux.org (66.211.214.131) 56(84) bytes of data.
64 bytes from gudrun.archlinux.org (66.211.214.131): icmp_seq=1 ttl=48 time=115 ms
...

You now have a shared internet connection inside the container.

Netctl

Make sure you have the netctl package installed:

# pacman -S netctl

Bridge (Simple)

You can set up an empty bridge if you do not need internet access in your LXC containers:

/etc/netctl/lxcbridge
Description="LXC Bridge"
Interface=br0
Connection=bridge
BindsToInterfaces=()
IP=static
Address=10.0.2.1/24
SkipForwardingDelay=yes

Enable lxcbridge and start it:

# netctl enable lxcbridge
# netctl start lxcbridge

Note: if you ever change the configuration of a netctl profile, you need to re-enable it by running netctl reenable lxcbridge so that the automatically generated service picks up the changes. After re-enabling, run netctl restart lxcbridge. For more information, consult the Netctl page.

Bridge (Internet-shared)

If you need an internet connection in your LXC containers, or want them to be able to reach the network the LXC host is on, you can add network interfaces to lxcbridge. In the examples below we add the host's eth0 network interface, which has internet access, to the LXC bridge:

Static IP

This example will bridge network interface eth0 and configure a static IP for the bridge:

/etc/netctl/lxcbridge
Description="LXC Bridge"
Interface=br0
Connection=bridge
BindsToInterfaces=(eth0)
IP=static
Address=10.0.2.1/24
SkipForwardingDelay=yes

After changes are made, make sure to re-enable and restart the bridge:

# netctl reenable lxcbridge
# netctl restart lxcbridge

DHCP

This example will bridge network interface eth0 and configure an IP via DHCP:

/etc/netctl/lxcbridge
Description="LXC Bridge"
Interface=br0
Connection=bridge
BindsToInterfaces=(eth0)
IP=dhcp
SkipForwardingDelay=yes

After changes are made, make sure to re-enable and restart the bridge:

# netctl reenable lxcbridge
# netctl restart lxcbridge

IP Forwarding

You will also have to enable IP forwarding on the LXC host:

# sysctl net.ipv4.ip_forward=1
# sysctl net.ipv6.conf.default.forwarding=1
# sysctl net.ipv6.conf.all.forwarding=1

To make changes persist upon reboot:

/etc/sysctl.d/40-ip-forward.conf
net.ipv4.ip_forward=1
net.ipv6.conf.default.forwarding=1
net.ipv6.conf.all.forwarding=1

And also apply this iptables rule (make sure you have the iptables package installed):

# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

To make changes persist upon reboot:

# iptables-save > /etc/iptables/iptables.rules
# systemctl enable iptables
# systemctl start iptables
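
You can verify that the NAT rule is in place with:

# iptables -t nat -nvL POSTROUTING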

Containers

This section provides information on how to set up OS-specific LXC containers on an Arch Linux host. It only covers creating and configuring the containers; for information on starting containers, read the sections above.

Arch Linux Container

To create an Arch Linux container, execute this command:

$ lxc-create -n arch -t archlinux

If you get the error /usr/share/lxc/templates/lxc-archlinux: line 183: pacstrap: command not found when trying to create the container, install the package extra/arch-install-scripts and try again.
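
That is, on the host:

# pacman -S arch-install-scripts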

If you have a Btrfs filesystem and want LXC to create a separate subvolume for the container's rootfs, append -B btrfs like this:

$ lxc-create -n arch -t archlinux -B btrfs

The configuration for the container should be similar to this:

/var/lib/lxc/arch/config
# Template used to create this container: /usr/share/lxc/templates/lxc-archlinux
# Parameters passed to the template:
# For additional config options, please look at lxc.conf(5)
lxc.utsname=arch
lxc.autodev=1
lxc.tty=1
lxc.pts=1024
lxc.mount=/var/lib/lxc/arch/fstab
lxc.cap.drop=sys_module mac_admin mac_override sys_time
lxc.kmsg=0
lxc.stopsignal=SIGRTMIN+4
#networking
lxc.network.type=veth
lxc.network.link=br0
lxc.network.flags=up
lxc.network.name=eth0
lxc.network.ipv4=10.0.2.2/24
lxc.network.ipv4.gateway=10.0.2.1
lxc.network.mtu=1500
#cgroups
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 1:7 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.rootfs = /var/lib/lxc/arch/rootfs

Make sure that the networking part is set up correctly to use the previously configured bridge. The IP and gateway configuration is also important if you want networking to work properly in the LXC container. You should now be able to #Start Container without further configuration.

Note: the sections below are partly outdated and pending a rewrite.

Container configuration

Configuration file

The main configuration files are used to describe how to originally create a container. Though these files may be located anywhere, /etc/lxc is probably a good place.

Note (23/Aug/2010): be aware that LXC may not handle additional whitespace in the configuration file. This has been observed with lxc.cgroup.devices.allow settings but may also be true for other settings. If in doubt, use only one space wherever whitespace is required.

Basic settings

lxc.utsname = $CONTAINER_NAME
lxc.mount = $CONTAINER_FSTAB
lxc.rootfs = $CONTAINER_ROOTFS
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = $CONTAINER_MACADDR
lxc.network.ipv4 = $CONTAINER_IPADDR
lxc.network.name = $CONTAINER_DEVICENAME

Basic settings explained

lxc.utsname : This will be the name of the cgroup for the container. Once the container is started, you should be able to see a new folder named /cgroup/$CONTAINER_NAME.

Furthermore, this will also be the value returned by hostname from within the container. Assuming you have not removed access, the container may overwrite this with its init script.

lxc.mount : This points to an fstab-formatted file listing the mount points used when lxc-start is called. This file is explained further in #Configuring fstab below.

Virtual Network Types

LXC containers support the following networking types:

* empty - creates only the loopback interface and assigns it to the container.
* veth - a virtual ethernet device is created, with one side assigned to the container and the other side attached to a bridge on the LXC host. If no bridge is specified, the veth pair device will be created but not attached to any bridge. Using veth with a bridge is useful when you want to create virtual networks for LXC containers and the LXC host.
* macvlan - a macvlan interface is created and assigned to the container. macvlan interfaces can only communicate with other macvlan interfaces on the same LXC host. This is useful when you want to create different networks for different LXC containers and do not need to access the containers from the LXC host via the network.
* vlan - a vlan interface is linked with the interface specified in the container's configuration and assigned to the container.
* phys - an already existing interface is assigned to the container. This is useful when you want to assign a physical network interface to an LXC container.
* none - causes the container to use the host's network namespace.

It is possible to configure a container with several network virtualization types at the same time, as sketched below; this wiki page configures only one at a time for simplicity.
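
A minimal sketch of a config with two interfaces, assuming that each new lxc.network.type line starts a new network definition (the interface names are only examples):

# first interface: veth pair attached to the br0 bridge
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
# second interface: macvlan on top of the host's eth0
lxc.network.type = macvlan
lxc.network.link = eth0
lxc.network.flags = up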

In your container config file, you will need to assign an IP address:

lxc.network.ipv4 = 192.168.100.2/24

You can also specify a default gateway:

lxc.network.ipv4.gateway = 192.168.100.1

Cgroups device configuration

Cgroups allow you to decide which devices the container may use. To see which devices are available to the container by default, run ls -la on the container's device directory:

# ls -la rootfs.dev/
....
crw------- 1 root root  5,   1 Sep 15 16:03 console
crw-rw-rw- 1 root root  1,   3 Sep 15 16:03 null
crw-rw-rw- 1 root root  1,   8 Sep 15 16:03 random
crw-rw-rw- 1 root root  5,   0 Sep 15 16:47 tty
-rw------- 1 root root       0 Sep 15 16:03 tty1
crw-rw-rw- 1 root root  1,   9 Sep 15 16:03 urandom
crw-rw-rw- 1 root root  1,   5 Sep 15 16:03 zero
...

This shows the device numbers in the format "majorNumber, minorNumber". You need these numbers to edit the cgroups configuration. For example, if you want to allow access to /dev/null, add this to the container's config file:

# the "c" stands for character device
lxc.cgroup.devices.allow = c 1:3 rwm

As you can see, the ls -la command shows you the numbers you need to allow or deny. The rootfs.dev/ directory corresponds to the /dev directory inside the container; you can also run ls -la /dev outside the container to see which devices are available on the host.

Add non-default devices

See Lxc-systemd if you want to add a device not available by default. If you just want to temporarily add a certain device, for instance a loop device, first find the major and minor device numbers. In our case we take 7:*, since we want all the loop devices from 0 on.

Then add this to the config file:

# lxc.cgroup.devices.allow = typeofdevice majornumber:minornumber rwm
lxc.cgroup.devices.allow = b 7:* rwm

Start the container, then run the following inside:

# check which devices are already busy
$ losetup -a
/dev/loop0: [65024]:1689590 (/mnt)
# pick a device not already in use, like loop5; the b stands for block device,
# c for character device (see man mknod); major 7, minor 5 is loop device number 5
$ mknod -m 0660 /dev/loop5 b 7 5
$ losetup -f
/dev/loop5

Your loop device is now available at /dev/loop5 and ready to be used.

Host device access settings
lxc.cgroup.devices.deny = a             # Deny all access to devices
lxc.cgroup.devices.allow = c 1:3 rwm    # /dev/null
lxc.cgroup.devices.allow = c 1:5 rwm    # /dev/zero
lxc.cgroup.devices.allow = c 5:1 rwm    # /dev/console
lxc.cgroup.devices.allow = c 5:0 rwm    # /dev/tty
lxc.cgroup.devices.allow = c 4:0 rwm    # /dev/tty0
lxc.cgroup.devices.allow = c 1:9 rwm    # /dev/urandom
lxc.cgroup.devices.allow = c 1:8 rwm    # /dev/random
lxc.cgroup.devices.allow = c 136:* rwm  # /dev/pts/*
lxc.cgroup.devices.allow = c 5:2 rwm    # /dev/pts/ptmx
lxc.cgroup.devices.allow = c 254:0 rwm  # /dev/bsg/0:0:0:0 (purpose unclear)

Host device access settings explained

lxc.cgroup.devices.deny : By setting this to a, we are stating that the container has access to no devices unless explicitly allowed in the configuration file.

Terminal settings

Note (Sep/15/2014): systemd will automatically log you into a tty when you run lxc-start; most of what follows is needed only if you want to run several TTYs simultaneously.

The following configuration is optional. Add it to your main configuration file if you wish to log in via lxc-console or through a terminal (e.g. Ctrl+Alt+F1).

The container can be configured with virtual consoles (tty devices). These may be devices from the host that the container is given permission to use (by its configuration file) or they may be devices created locally within the container.

The host's virtual consoles are accessed using the key sequence Alt+Fn (or Ctrl+Alt+Fn from within an X11 session). The left Alt key reaches consoles 1 through 12 and the right Alt key reaches consoles 13 through 24. Further virtual consoles may be reached by the Alt+→ key sequence which steps to the next virtual console.

The container's local virtual consoles may be accessed using the "lxc-console" command.

Host Virtual Consoles

The container may access the host's virtual consoles if the host is not using them and the container's configuration allows it. Typical container configuration would deny access to all devices and then allow access to specific devices like this:

 lxc.cgroup.devices.deny = a          # Deny all access to devices
 lxc.cgroup.devices.allow = c 4:0 rwm # /dev/tty0
 lxc.cgroup.devices.allow = c 4:1 rwm # /dev/tty1
 lxc.cgroup.devices.allow = c 4:2 rwm # /dev/tty2

For a container to be able to use a host's virtual console it must not be in use by the host. This will most likely require the host's /etc/inittab to be modified to ensure no getty or other process runs on any virtual console that is to be used by the container.

After editing the host's /etc/inittab file, issuing killall -HUP init will terminate any getty processes that are no longer configured, freeing up those virtual consoles for use by the container.

Note that local virtual consoles take precedence over host virtual consoles. This is described in the next section.

Local Virtual Consoles

The number of local virtual consoles that the container has is defined in the container's configuration file (normally on the host in /etc/lxc). It is defined thus:

 lxc.tty = n

where n is the number of local virtual consoles required.

The local virtual consoles are numbered starting at tty1 and take precedence over any of the host's virtual consoles that the container might be entitled to use. This means that, for example, if n = 2 then the container will not be able to use the host's tty1 and tty2 devices even if entitled to do so by its configuration file. Setting n to 0 prevents local virtual consoles from being created, allowing full access to any of the host's virtual consoles that the container might be entitled to use.

/dev/tty Device Files

The container must have a tty device file (e.g. /dev/tty1) for each virtual console (host or local). These can be created thus:

# mknod -m 666 /dev/tty1 c 4 1
# mknod -m 666 /dev/tty2 c 4 2

and so on...

In the above, c means character device, 4 is the major device number (tty devices) and 1, 2, 3, etc., is the minor device number (specific tty device). Note that /dev/tty0 is special and always refers to the current virtual console.

For further info on tty devices, read this: https://www.kernel.org/doc/html/latest/admin-guide/devices.html#terminal-devices

If a virtual console's device file does not exist in the container, then the container cannot use the virtual console.

Configuring Log-In Ability

The container's virtual consoles may be used for login sessions if the container runs "getty" services on their tty devices. This is normally done by the container's "init" process and is configured in the container's /etc/inittab file using lines like this:

 c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux

There is one line per device. The first part c1 is just a unique label, the second part defines applicable run levels, the third part tells init to start a new getty when the current one terminates and the last part gives the command line for the getty. For further information refer to man init.

If there is no getty process on a virtual console it will not be possible to log in via that virtual console. A getty is not required on a virtual console unless it is to be used to log in.

If a virtual console is to allow root logins it also needs to be listed in the container's /etc/securetty file.
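
For example, to permit root logins on the first two consoles, the container's /etc/securetty should contain lines like:

tty1
tty2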

Troubleshooting virtual consoles

If lxc.tty is set to a number n, then no host devices numbered n or below will be accessible, even if the above configuration is present, because they will be replaced with local virtual consoles instead.

A tty device file's major number will change from 4 to 136 if it is a local virtual console. This change is visible within the container but not when viewing the container's devices from the host's filesystem. This information is useful when troubleshooting.

This can be checked from within a container thus:

 # ls -Al /dev/tty*
 crw------- 1 root root 136, 10 Aug 21 21:28 /dev/tty1
 crw------- 1 root root   4, 2  Aug 21 21:28 /dev/tty2

Pseudo Terminals

 lxc.pts = 1024

This sets the maximum number of pseudo-terminals that may be created in /dev/pts. Currently, assuming the kernel was compiled with CONFIG_DEVPTS_MULTIPLE_INSTANCES, this tells lxc-start to mount the devpts filesystem with the newinstance flag.
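
You can check whether your kernel was built with this option (assuming it exposes its configuration via /proc/config.gz, as the Arch kernel does):

$ zgrep CONFIG_DEVPTS_MULTIPLE_INSTANCES /proc/config.gz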

Configuration file notes

At runtime /dev/ttyX devices are recreated

If you have enabled multiple devpts instances in your kernel, lxc-start will recreate lxc.tty number of /dev/ttyX devices when it is executed.

This means that you will have lxc.tty number of pseudo ttys. If you are planning on accessing the container via a "real" terminal (Ctrl+Alt+FX), make sure that X is greater than lxc.tty, since the devices numbered lxc.tty and below are recreated as pseudo ttys.

To tell whether a tty has been re-created, log in to the container via either lxc-console or SSH and perform ls -Al on the tty device. Devices with a major number of 4 are "real" tty devices, whereas a major number of 136 indicates a pts.

Be aware that this is only visible from within the container itself and not from the host.

Containers have access to host's TTY nodes

If you do not properly restrict the container's access to the /dev/tty nodes, the container may have access to the host's.

Considering that, as previously mentioned, lxc-start recreates lxc.tty number of /dev/tty devices, any tty nodes present in the container with a minor number greater than lxc.tty will be linked to the host's.

To access the container from a host TTY
  1. On the host, verify no getty is started for that tty by checking /etc/inittab.
  2. In the container, start a getty for that tty.
To prevent access to the host TTY

Please have a look at the configuration statements found in host device access settings.

Via lxc.cgroup.devices.deny = a we are preventing access to all host-level devices. Then, through lxc.cgroup.devices.allow = c 4:1 rwm, we are allowing access to the host's /dev/tty1. In the above example, simply removing all allow statements for major number 4 and minor > 1 should be sufficient.

To test this access

I may be off here, but looking at the output of the ls command below should show you both the major and minor device numbers. These are located after the owner and group, represented as 4, 2.

  1. Set lxc.tty to 1
  2. Make sure that the container has /dev/tty1 and /dev/tty2
  3. lxc-start the container
  4. lxc-console into the container
  5. ls -Al /dev/tty2
    crw------- 1 root root 4, 2 Dec 2 00:20 /dev/tty2
  6. echo "test output" > /dev/tty2
  7. Ctrl+Alt+F2 to view the host's second terminal
  8. You should see "test output" printed on the screen

Configuration troubleshooting

console access denied: Permission denied

If, when executing lxc-console, you receive the error lxc-console: console access denied: Permission denied, you have most likely either omitted lxc.tty or set it to 0.

lxc-console does not provide a login prompt

Though you are reaching a tty on the container, it is most likely not running a getty. Double-check that you have a getty defined in the container's /etc/inittab for that specific tty.

If using systemd, chances are that a problem with the getty@.service unit will bite you: it only starts a getty if /dev/tty0 exists, and since this condition is not met in the container, you get no getty. Apply the following patch to let lxc-console work:

--- /usr/lib/systemd/system/getty@.service.orig 2013-05-30 12:55:28.000000000 +0000
+++ /usr/lib/systemd/system/getty@.service      2013-06-16 23:05:49.827146901 +0000
@@ -20,7 +20,8 @@
 # On systems without virtual consoles, don't start any getty. (Note
 # that serial gettys are covered by serial-getty@.service, not this
 # unit
-ConditionPathExists=/dev/tty0
+ConditionVirtualization=|lxc
+ConditionPathExists=|/dev/tty0
 
 [Service]
 # the VT is cleared by TTYVTDisallocate

For more than one getty, you have to explicitly enable the needed services (and decrease lxc.tty in the container configuration) by doing this:

# ln -sf /usr/lib/systemd/system/getty@.service /etc/systemd/system/getty.target.wants/getty@ttyX.service

Replace ttyX with the tty you want to use, such as tty2. On a real system, a configurable number of getty services is created automatically by systemd-logind.service.

Configuring fstab

none $CONTAINER_ROOTFS/dev/pts devpts defaults 0 0
none $CONTAINER_ROOTFS/proc    proc   defaults 0 0
none $CONTAINER_ROOTFS/sys     sysfs  defaults 0 0
none $CONTAINER_ROOTFS/dev/shm tmpfs  defaults 0 0

This fstab is used by lxc-start when mounting the container. You can define any mount that would be possible on the host, such as bind mounting part of the host's own filesystem into the container. However, be aware of any and all security implications that this may have.
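
For example, a bind mount of a host directory into the container might look like this (the paths are hypothetical):

/srv/data $CONTAINER_ROOTFS/srv/data none bind 0 0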

Warning: you certainly do not want to bind mount the host's /dev to the container, as this would allow it to, amongst other things, reboot the host.

Troubleshooting

Container cannot be stopped when using systemd

lxc-stop should be used for a clean shutdown or reboot of the container, but only reboot works out of the box when using systemd.

Shutdown is signalled to the container with SIGPWR, but current systemd does not have any services in place to handle the sigpwr.target. For the container we can simply reuse the poweroff.target and get exactly what we want:

# ln -s /usr/lib/systemd/system/poweroff.target ${CONTAINER_RFS}/etc/systemd/system/sigpwr.target

Cannot use pacman from inside an LXC container instance

Attempting to use pacman inside an LXC environment entered via lxc-attach results in the following error:

error: GPGME error: Inappropriate ioctl for device

To avoid this error, use pacman in an LXC session entered via lxc-console instead.

Container cannot find installed commands

Are the language settings a bit off as well, and do basic commands like ls or locale-gen not work?

Check the content of $PATH. In mixed environments, such as an Arch host with a Debian template, using lxc-attach results in wonky behaviour: your host's environment variables are used, but there is no match for them in the container. If you are planning to run commands with lxc-attach without logging in via lxc-console first, make sure that the environment is set correctly.
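
One way to sidestep this (assuming your LXC version supports the flag) is to clear the inherited host environment when attaching:

$ lxc-attach -n CONTAINER_NAME --clear-env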

Starting container changes keymap of host computer

One solution is to create a wrapper around the start script that resets the keyboard layout every few seconds until the container has started:

# ./lxc-start-wrapper
#!/bin/sh
# repeatedly reset the layout in the background while the container boots
sleep 3 && ./resetkeyboard 2>/dev/null &
sleep 5 && ./resetkeyboard 2>/dev/null &
sleep 6 && ./resetkeyboard 2>/dev/null &
sleep 7 && ./resetkeyboard 2>/dev/null &
sleep 9 && ./resetkeyboard 2>/dev/null &
#sleep 15 && ./resetkeyboard 2>/dev/null &
sudo lxc-start -n nameofcontainer

# ./resetkeyboard
#!/bin/sh
setxkbmap us -print | xkbcomp - $DISPLAY
#setxkbmap dvorak -print | xkbcomp - $DISPLAY
# in case you have a custom keyboard layout located in $HOME/.xkb
#setxkbmap -I ~/.xkb nameofcustomlayout -print | xkbcomp -I$HOME/.xkb - $DISPLAY

See also