https://wiki.archlinux.org/api.php?action=feedcontributions&user=Joridos&feedformat=atom
ArchWiki - User contributions [en] 2024-03-29T13:26:58Z User contributions MediaWiki 1.41.0

https://wiki.archlinux.org/index.php?title=User:Joridos&diff=324008 User:Joridos 2014-07-08T03:32:32Z <p>Joridos: </p>
<hr />
<div>Hello, I am Joridos :D<br />
<br />
my blog is [http://joridos.com]</div>

https://wiki.archlinux.org/index.php?title=Linux_Containers_(Portugu%C3%AAs)&diff=324005 Linux Containers (Português) 2014-07-08T03:12:13Z <p>Joridos: Created the page with a Portuguese translation of the [[Linux Containers]] introduction.</p>

https://wiki.archlinux.org/index.php?title=Linux_Containers&diff=324004 Linux Containers 2014-07-08T03:02:21Z <p>Joridos: </p>
<hr />
<div>[[Category:Security]]<br />
[[Category:Virtualization]]<br />
{{Stub|Some parts of this are dated and are being rewritten to offer up-to-date information regarding LXC setup on Arch Linux}}<br />
<br />
[[pt:Linux Containers]]<br />
<br />
{{Related articles start}}<br />
{{Related|Arch systemd container}}<br />
{{Related|Docker}}<br />
{{Related|Lxc-systemd}}<br />
{{Related articles end}}<br />
<br />
'''LinuX Containers''' ('''LXC''') is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (LXC host). <br />
<br />
LXC does not provide a virtual machine, but rather a virtual environment that has its own CPU, memory, block I/O and network space. This is provided by [[cgroups]] features in the Linux kernel on the LXC host. It is similar to a chroot, but offers much more isolation.<br />
<br />
This document is intended as an overview of setting up and deploying containers. A certain amount of prerequisite knowledge and skills is required (networking setup, running commands as root, installing packages from the [[AUR]], kernel configuration, mounting filesystems etc.).<br />
<br />
== Setup ==<br />
<br />
Virtualization features for LXC containers are provided by the Linux kernel and the LXC userspace tools. This section covers basic information on how to set up an LXC-capable system.<br />
<br />
=== Packages ===<br />
<br />
The {{Pkg|lxc}} package is available in the [[official repositories]]. It provides the LXC userspace tools, which are used to manage LXC containers on the LXC host; install it first.<br />
<br />
It is also highly recommended to install {{Pkg|bridge-utils}} and [[netctl]], which are useful when configuring different network virtualization types. See also [[Bridge with netctl]].<br />
<br />
You can also optionally install [[OpenVPN]], see also [[OpenVPN Bridge]].<br />
<br />
LXC depends on the control group filesystem being mounted. The standard location for it is {{ic|/sys/fs/cgroup}}. The cgroup filesystem is mounted automatically by systemd.<br />
<br />
Depending on which Linux OS you want to run in your containers, you might need to install additional packages required by the container templates. If you plan to create Arch Linux containers, installing {{Pkg|arch-install-scripts}} from the [[official repositories]] is enough.<br />
<br />
To install other OS containers, you need these OS specific packages:<br />
* Debian: {{AUR|debootstrap}} package from [[AUR]].<br />
* CentOS: {{AUR|yum}} package from [[AUR]].<br />
<br />
=== Testing Setup ===<br />
<br />
Once the {{Pkg|lxc}} package is installed, running {{ic|lxc-checkconfig}} will print out a list of your system's capabilities. For a correctly configured system, the output should be similar to:<br />
<br />
{{bc|$ lxc-checkconfig<br />
--- Namespaces ---<br />
Namespaces: enabled<br />
Utsname namespace: enabled<br />
Ipc namespace: enabled<br />
Pid namespace: enabled<br />
User namespace: missing<br />
Network namespace: enabled<br />
Multiple /dev/pts instances: enabled<br />
<br />
--- Control groups ---<br />
Cgroup: enabled<br />
Cgroup clone_children flag: enabled<br />
Cgroup device: enabled<br />
Cgroup sched: enabled<br />
Cgroup cpu account: enabled<br />
Cgroup memory controller: enabled<br />
Cgroup cpuset: enabled<br />
<br />
--- Misc ---<br />
Veth pair device: enabled<br />
Macvlan: enabled<br />
Vlan: enabled<br />
File capabilities: enabled}}<br />
<br />
If, however, {{ic|lxc-checkconfig}} reports missing components, that usually means your kernel is not properly configured for full LXC support. The {{Pkg|linux}} kernel package from the [[official repositories]] has LXC support. You can check a kernel's LXC configuration before actually booting it by pointing the {{ic|CONFIG}} environment variable at its config file:<br />
<br />
{{bc|1=$ CONFIG=/path/to/kernel/config /usr/bin/lxc-checkconfig}}<br />
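Under the hood, this check is essentially a grep over the kernel config for the {{ic|CONFIG_*}} options LXC needs. A minimal sketch of that mechanism in plain shell, using a hypothetical config fragment rather than a real kernel config:

```shell
# check_opt CONFIGFILE OPTION prints "enabled" or "missing", mirroring
# the test lxc-checkconfig applies to each CONFIG_* option.
check_opt() {
    if grep -q "^CONFIG_$2=y" "$1"; then echo enabled; else echo missing; fi
}

# Hypothetical kernel config fragment, for demonstration only:
config=$(mktemp)
cat > "$config" <<'EOF'
CONFIG_NAMESPACES=y
CONFIG_PID_NS=y
# CONFIG_USER_NS is not set
EOF

check_opt "$config" NAMESPACES   # enabled
check_opt "$config" USER_NS      # missing
```

The real {{ic|lxc-checkconfig}} inspects many more options (cgroups, veth, macvlan, etc.); this only shows the mechanism.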
<br />
=== Network Configuration ===<br />
<br />
This section describes the network configuration required on the LXC host before you create LXC containers.<br />
<br />
LXC containers support different virtual network types (see [[#Virtual Network Types]] below). For most virtual networking types to work you will need to configure a bridge device on your host. LXC expects a '''br0''' interface to be available during creation of some containers; it is also used in the examples below with '''veth''' networking. The preferred way to set up a bridge on Arch is with [[Netctl]]. Make sure you have the {{Pkg|netctl}} package installed:<br />
<br />
{{bc|# pacman -S netctl}}<br />
<br />
==== Bridge (Simple) ====<br />
You can set up an empty bridge if you do NOT need internet access in your LXC containers:<br />
<br />
{{hc|1=/etc/netctl/lxcbridge|2=<br />
Description="LXC Bridge"<br />
Interface=br0<br />
Connection=bridge<br />
BindsToInterfaces=()<br />
IP=static<br />
Address=10.0.2.1/24<br />
FwdDelay=0}}<br />
<br />
Enable lxcbridge and start it:<br />
<br />
{{bc|$ netctl enable lxcbridge<br />
$ netctl start lxcbridge}}<br />
<br />
{{Note|If you ever change the configuration of a netctl profile, you need to '''reenable''' it by running {{ic|netctl reenable lxcbridge}} so that the automatic service picks up the changes, and then run {{ic|netctl restart lxcbridge}}. For more information, see [[Netctl]].}}<br />
<br />
==== Bridge (Internet-shared) ====<br />
<br />
If you need an internet connection in your LXC containers, or want them to be able to reach the network the LXC host is on, you can add network interfaces to lxcbridge. In the examples below we add the '''enp3s0''' network interface, which has internet access, to the LXC bridge:<br />
<br />
===== Static IP =====<br />
<br />
This example will bridge network interface '''enp3s0''' and configure a static IP for the bridge:<br />
<br />
{{hc|1=/etc/netctl/lxcbridge|2=<br />
Description="LXC Bridge"<br />
Interface=br0<br />
Connection=bridge<br />
BindsToInterfaces=(enp3s0)<br />
IP=static<br />
Address=10.0.2.1/24<br />
FwdDelay=0}}<br />
<br />
After changes are made, make sure to re-enable and restart the bridge:<br />
<br />
{{bc|$ netctl reenable lxcbridge<br />
$ netctl restart lxcbridge}}<br />
<br />
===== DHCP =====<br />
<br />
This example will bridge network interface '''enp3s0''' and configure an IP via DHCP:<br />
<br />
{{hc|1=/etc/netctl/lxcbridge|2=<br />
Description="LXC Bridge"<br />
Interface=br0<br />
Connection=bridge<br />
BindsToInterfaces=(enp3s0)<br />
IP=dhcp<br />
FwdDelay=0}}<br />
<br />
After changes are made, make sure to re-enable and restart the bridge:<br />
<br />
{{bc|$ netctl reenable lxcbridge<br />
$ netctl restart lxcbridge}}<br />
<br />
===== IP Forwarding =====<br />
<br />
You will also have to enable IP Forwarding on LXC Host:<br />
<br />
{{bc|1=$ sysctl net.ipv4.ip_forward=1<br />
$ sysctl net.ipv6.conf.default.forwarding=1<br />
$ sysctl net.ipv6.conf.all.forwarding=1}}<br />
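Sysctl values are also exposed as files under {{ic|/proc/sys}}, so the current forwarding state can be verified without root. A small sketch (the {{ic|unknown}} fallback covers environments where {{ic|/proc/sys}} is unavailable):

```shell
# net.ipv4.ip_forward is mirrored at /proc/sys/net/ipv4/ip_forward and
# is readable without privileges: 1 means forwarding is on, 0 means off.
state=$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo unknown)
echo "ip_forward=$state"
```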
<br />
To make changes persist upon reboot:<br />
<br />
{{hc|1=/etc/sysctl.d/40-ip-forward.conf|2=<br />
net.ipv4.ip_forward=1}}<br />
<br />
Also apply this [[iptables]] rule (make sure you have the {{Pkg|iptables}} package installed):<br />
<br />
{{bc|$ iptables -t nat -A POSTROUTING -o enp3s0 -j MASQUERADE}}<br />
<br />
To make changes persist upon reboot:<br />
<br />
{{bc|1=$ iptables-save > /etc/iptables/iptables.rules<br />
$ systemctl enable iptables<br />
$ systemctl start iptables}}<br />
<br />
== Container setup ==<br />
<br />
This section will provide information on how to install various containers. You can find all available templates that come with LXC in {{ic|/usr/share/lxc/templates}} directory:<br />
<br />
{{hc|$ ls /usr/share/lxc/templates|<br />
lxc-alpine lxc-altlinux lxc-archlinux lxc-busybox lxc-centos lxc-cirros lxc-debian lxc-download lxc-fedora lxc-gentoo lxc-openmandriva lxc-opensuse lxc-oracle lxc-plamo lxc-sshd lxc-ubuntu lxc-ubuntu-cloud}}<br />
<br />
These template files are bash scripts which build the LXC container. Before creating an LXC container from a specific template, make sure you have installed all the packages required to build it. Required packages for popular containers are listed below:<br />
<br />
* '''Arch Linux''' - {{Pkg|arch-install-scripts}}<br />
* '''Debian''' - {{AUR|debootstrap}}<br />
* '''CentOS''' - {{AUR|rpm}}<br />
* ...<br />
<br />
=== Create Container ===<br />
<br />
To create containers we will use the {{ic|lxc-create}} command and specify the template. Templates can also be passed special arguments (after {{ic|--}}), which usually allow you to install a specific release. Examples:<br />
<br />
{{bc|$ lxc-create -n CONTAINER_NAME -t TEMPLATE<br />
$ lxc-create -n CONTAINER_NAME -t TEMPLATE -- -r RELEASE}}<br />
<br />
Containers are stored in the {{ic|/var/lib/lxc/CONTAINER_NAME}} directory. The main configuration file is {{ic|/var/lib/lxc/CONTAINER_NAME/config}} and the root filesystem is under {{ic|/var/lib/lxc/CONTAINER_NAME/rootfs}}.<br />
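The on-disk layout described above can be illustrated without building a real container; the sketch below recreates the skeleton ({{ic|config}} plus {{ic|rootfs/}}) in a temporary directory purely for demonstration:

```shell
# Recreate the /var/lib/lxc/CONTAINER_NAME skeleton in a temporary
# directory; no real container is built here.
base=$(mktemp -d)
mkdir -p "$base/mycontainer/rootfs"   # container root filesystem
touch "$base/mycontainer/config"      # main configuration file
ls "$base/mycontainer"                # lists: config  rootfs
```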
<br />
If you are using [[Btrfs]], you can append {{ic|-B btrfs}} to the {{ic|lxc-create}} command if you want LXC to make a Btrfs subvolume for storing the container's rootfs. This comes in handy when you want to clone containers with the help of the {{ic|lxc-clone}} command, as cloning and cloning from snapshots will then use '''Btrfs''' features:<br />
<br />
{{bc|$ lxc-create -n CONTAINER_NAME -t TEMPLATE -B btrfs}}<br />
<br />
It is also worth noting that during creation of some containers the setup generates private/GPG keys for OS package managers etc., so it is important that your random devices are properly seeded with random data. Otherwise, the setup process can hang while waiting for entropy. To avoid this, you can install the {{Pkg|haveged}} package and run {{ic|haveged}} to seed {{ic|/dev/random}} before issuing the {{ic|lxc-create}} command.<br />
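Before running {{ic|lxc-create}} you can check how much entropy the kernel currently estimates is available; persistently low values suggest installing {{Pkg|haveged}}. A sketch (the {{ic|0}} fallback covers environments where the file is unavailable):

```shell
# The kernel's current entropy estimate, in bits; values in the low
# hundreds can make key generation stall. Falls back to 0 where the
# file is unavailable (e.g. in some containers).
entropy=$(cat /proc/sys/kernel/random/entropy_avail 2>/dev/null || echo 0)
echo "available entropy: $entropy bits"
```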
<br />
=== List Containers ===<br />
<br />
You can list all installed LXC containers with the help of {{ic|lxc-ls}} command:<br />
<br />
{{bc|$ lxc-ls}}<br />
<br />
You can also provide {{ic|-f}} argument to get more detailed output:<br />
<br />
{{bc|$ lxc-ls -f}}<br />
<br />
=== Start Container ===<br />
<br />
After the container is created you can start it via the {{ic|lxc-start}} command:<br />
<br />
{{bc|$ lxc-start -n CONTAINER_NAME}}<br />
<br />
This will output all the boot messages in the current terminal and ask you to log in. You can log in and use the container. Once you are done, issue the '''halt''' command inside the container to shut it down.<br />
<br />
Most of the time, you will want to start the LXC container in the background and then use the {{ic|lxc-attach}} command to log in to it. To start an LXC container in the background:<br />
<br />
{{bc|$ lxc-start -n CONTAINER_NAME -d}}<br />
<br />
=== Attach to the Container ===<br />
<br />
To attach to an LXC container running in the background:<br />
<br />
{{bc|$ lxc-attach -n CONTAINER_NAME}}<br />
<br />
=== Stop Container ===<br />
<br />
To stop LXC container:<br />
<br />
{{bc|$ lxc-stop -n CONTAINER_NAME}}<br />
<br />
=== Starting containers on Boot ===<br />
<br />
You can make LXC containers start on boot by enabling the container-specific systemd service:<br />
<br />
{{bc|$ systemctl enable lxc@CONTAINER_NAME.service}}<br />
<br />
== Containers ==<br />
<br />
This section will provide information on how to setup OS-specific LXC containers on Arch Linux host. This will only include information on how to create and configure the container. For information on starting containers, read the sections above.<br />
<br />
=== Arch Linux Container ===<br />
<br />
To create an Arch Linux container, execute this command:<br />
<br />
{{bc|$ lxc-create -n arch -t archlinux}}<br />
<br />
If you are on a [[Btrfs]] filesystem and you want LXC to create a separate subvolume for the container's rootfs, append {{ic|-B btrfs}} like this:<br />
<br />
{{bc|$ lxc-create -n arch -t archlinux -B btrfs}}<br />
<br />
The configuration for the container should be similar to this:<br />
<br />
{{hc|1=/var/lib/lxc/arch/config|2=<br />
# Template used to create this container: /usr/share/lxc/templates/lxc-archlinux<br />
# Parameters passed to the template:<br />
# For additional config options, please look at lxc.conf(5)<br />
lxc.utsname=arch<br />
lxc.autodev=1<br />
lxc.tty=1<br />
lxc.pts=1024<br />
lxc.mount=/var/lib/lxc/arch/fstab<br />
lxc.cap.drop=sys_module mac_admin mac_override sys_time<br />
lxc.kmsg=0<br />
lxc.stopsignal=SIGRTMIN+4<br />
#networking<br />
lxc.network.type=veth<br />
lxc.network.link=br0<br />
lxc.network.flags=up<br />
lxc.network.name=eth0<br />
lxc.network.ipv4=10.0.2.2/24<br />
lxc.network.ipv4.gateway=10.0.2.1<br />
lxc.network.mtu=1500<br />
#cgroups<br />
lxc.cgroup.devices.deny = a<br />
lxc.cgroup.devices.allow = c *:* m<br />
lxc.cgroup.devices.allow = b *:* m<br />
lxc.cgroup.devices.allow = c 1:3 rwm<br />
lxc.cgroup.devices.allow = c 1:5 rwm<br />
lxc.cgroup.devices.allow = c 1:7 rwm<br />
lxc.cgroup.devices.allow = c 1:8 rwm<br />
lxc.cgroup.devices.allow = c 1:9 rwm<br />
lxc.cgroup.devices.allow = c 4:1 rwm<br />
lxc.cgroup.devices.allow = c 5:0 rwm<br />
lxc.cgroup.devices.allow = c 5:1 rwm<br />
lxc.cgroup.devices.allow = c 5:2 rwm<br />
lxc.cgroup.devices.allow = c 136:* rwm<br />
lxc.rootfs = /var/lib/lxc/arch/rootfs}}<br />
<br />
Make sure that the '''networking''' part is set up correctly to use the previously created bridge. The IP and gateway configuration is also important if you want networking to work properly in the LXC container. You should be able to [[#Start Container|start the container]] now without needing further configuration.<br />
<br />
{{Stub|Parts of the article below are mostly outdated and are pending rewrite.}}<br />
<br />
==Container configuration==<br />
<br />
===Configuration file===<br />
<br />
The main configuration files are used to describe how to originally create a container. Though these files may be located anywhere, {{ic|/etc/lxc}} is probably a good place.<br />
<br />
{{Note|As of August 2010, the kernel may not handle additional whitespace in the configuration file. This has been experienced with {{ic|lxc.cgroup.devices.allow}} settings but may also be true of other settings. If in doubt, use only one space wherever whitespace is required.}}<br />
<br />
====Basic settings====<br />
<br />
lxc.utsname = $CONTAINER_NAME<br><br />
lxc.mount = $CONTAINER_FSTAB<br />
lxc.rootfs = $CONTAINER_ROOTFS<br><br />
lxc.network.type = veth<br />
lxc.network.flags = up<br />
lxc.network.link = br0<br />
lxc.network.hwaddr = $CONTAINER_MACADDR <br />
lxc.network.ipv4 = $CONTAINER_IPADDR<br />
lxc.network.name = $CONTAINER_DEVICENAME<br />
<br />
=====Basic settings explained=====<br />
<br />
'''lxc.utsname''' : This will be the name of the cgroup for the container. Once the container is started, you should be able to see a new folder named ''/cgroup/$CONTAINER_NAME''.<br />
<br />
Furthermore, this will also be the value returned by ''hostname'' from within the container. Assuming you have not removed access, the container may overwrite this with its init script.<br />
<br />
'''lxc.mount''' : This points to an fstab-formatted file listing the mount points used when ''lxc-start'' is called. This file is explained [[#Configuring fstab|below]].<br />
<br />
==== Virtual Network Types ====<br />
<br />
LXC containers support the following networking types:<br />
* '''empty''' - creates only loopback interface and assigns it to the container.<br />
* '''veth''' - a virtual ethernet device is created with one side assigned to the container and the other side attached to a bridge on the LXC host. If the bridge is not specified, the veth pair device will be created but not attached to any bridge. Using '''veth''' with a '''bridge''' is useful when you want to create virtual networks for LXC containers and the LXC host.<br />
* '''macvlan''' - a macvlan interface is created and assigned to the container. macvlan interfaces can only communicate to other macvlan interfaces on the same LXC host. This is useful when you want to create different networks for different LXC containers and you do not need to access LXC containers from LXC host via network.<br />
* '''vlan''' - a vlan interface is linked with the interface specified in the container's configuration and is assigned to the container.<br />
* '''phys''' - an already existing interface is assigned to the container. This is useful when you want to assign a physical network interface to a LXC container.<br />
* '''none''' - will cause the container to use the host's network namespace.<br />
<br />
It is possible to configure a container with several network virtualization types at the same time. For simplicity, this wiki page configures only one at a time.<br />
<br />
In your container config file, you will need to assign an IP address:<br />
<br />
lxc.network.ipv4 = 192.168.100.2/24<br />
<br />
You can also specify a default gateway:<br />
<br />
 lxc.network.ipv4.gateway = 192.168.100.1<br />
<br />
<br />
====Terminal settings====<br />
<br />
The following configuration is optional. You may add it to your main configuration file if you wish to log in via lxc-console or through a terminal (e.g. {{ic|Ctrl+Alt+F1}}).<br />
<br />
The container can be configured with virtual consoles (tty devices). These may be devices from the host that the container is given permission to use (by its configuration file) or they may be devices created locally within the container.<br />
<br />
The host's virtual consoles are accessed using the key sequence {{ic|Alt+Fn}} (or {{ic|Ctrl+Alt+Fn}} from within an X11 session). The left {{ic|Alt}} key reaches consoles 1 through 12 and the right {{ic|Alt}} key reaches consoles 13 through 24. Further virtual consoles may be reached by the {{ic|Alt+→}} key sequence which steps to the next virtual console.<br />
<br />
The container's local virtual consoles may be accessed using the "lxc-console" command.<br />
<br />
===== Host Virtual Consoles =====<br />
<br />
The container may access the host's virtual consoles if the host is not using them and the container's configuration allows it. Typical container configuration would deny access to all devices and then allow access to specific devices like this:<br />
<br />
lxc.cgroup.devices.deny = a # Deny all access to devices<br />
lxc.cgroup.devices.allow = c 4:0 rwm # /dev/tty0<br />
lxc.cgroup.devices.allow = c 4:1 rwm # /dev/tty1<br />
lxc.cgroup.devices.allow = c 4:2 rwm # /dev/tty2<br />
<br />
For a container to be able to use a host's virtual console it must not be in use by the host. This will most likely require the host's {{ic|/etc/inittab}} to be modified to ensure no getty or other process runs on any virtual console that is to be used by the container.<br />
<br />
After editing the host's {{ic|/etc/inittab}} file, issuing a {{ic|killall -HUP init}} will terminate any getty processes that are no longer configured, and this will free up the virtual console for use by the container.<br />
<br />
Note that local virtual consoles take precedence over host virtual consoles. This is described in the next section.<br />
<br />
===== Local Virtual Consoles =====<br />
<br />
The number of local virtual consoles that the container has is defined in the container's configuration file (normally on the host in {{ic|/etc/lxc}}). It is defined thus:<br />
<br />
lxc.tty = n<br />
<br />
where {{ic|n}} is the number of local virtual consoles required.<br />
<br />
The local virtual consoles are numbered starting at tty1 and take precedence over any of the host's virtual consoles that the container might be entitled to use. This means that, for example, if n = 2 then the container will not be able to use the host's tty1 and tty2 devices even if entitled to do so by its configuration file. Setting n to 0 will prevent local virtual consoles from being created, thus allowing full access to any of the host's virtual consoles that the container might be entitled to use.<br />
<br />
===== /dev/tty Device Files =====<br />
The container must have a tty device file (e.g. {{ic|/dev/tty1}}) for each virtual console (host or local). These can be created thus:<br />
# mknod -m 666 /dev/tty1 c 4 1<br />
# mknod -m 666 /dev/tty2 c 4 2<br />
<br />
and so on...<br />
<br />
In the above, {{ic|c}} means character device, {{ic|4}} is the major device number (tty devices) and {{ic|1}}, {{ic|2}}, {{ic|3}}, etc., is the minor device number (specific tty device). Note that {{ic|/dev/tty0}} is special and always refers to the current virtual console.<br />
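Rather than reading ''ls'' output, the major/minor numbers of a device file can be confirmed directly with GNU {{ic|stat}} ({{ic|%t}} is the major, {{ic|%T}} the minor, both in hex):

```shell
# %t and %T print a device file's major and minor numbers in hex.
# /dev/null is character device 1:3 on Linux.
stat -c 'major=0x%t minor=0x%T' /dev/null
# → major=0x1 minor=0x3
```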
<br />
For further info on tty devices, read this: http://www.kernel.org/pub/linux/docs/device-list/devices.txt<br />
<br />
'''If a virtual console's device file does not exist in the container, then the container cannot use the virtual console.'''<br />
<br />
===== Configuring Log-In Ability =====<br />
<br />
The container's virtual consoles may be used for login sessions if the container runs "getty" services on their tty devices. This is normally done by the container's "init" process and is configured in the container's {{ic|/etc/inittab}} file using lines like this:<br />
<br />
c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux<br />
<br />
There is one line per device. The first part, {{ic|c1}}, is just a unique label; the second part defines the applicable run levels; the third part tells init to start a new getty when the current one terminates; and the last part gives the command line for the getty. For further information refer to {{ic|man init}}.<br />
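The four colon-separated fields can be split mechanically; the sketch below parses the example entry above with a plain POSIX {{ic|read}}:

```shell
# Split the example inittab entry into its four fields:
# label : runlevels : action : process. The process field is read
# last, so any remaining text (spaces included) lands in it.
line='c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux'
IFS=: read -r id runlevels action process <<EOF
$line
EOF
echo "id=$id runlevels=$runlevels action=$action"   # id=c1 runlevels=2345 action=respawn
echo "process=$process"                             # process=/sbin/agetty -8 38400 tty1 linux
```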
<br />
If there is no getty process on a virtual console it will not be possible to log in via that virtual console. A getty is not required on a virtual console unless it is to be used to log in.<br />
<br />
If a virtual console is to allow root logins it also needs to be listed in the container's {{ic|/etc/securetty}} file.<br />
<br />
===== Troubleshooting virtual consoles =====<br />
<br />
If lxc.tty is set to a number, n, then no host devices numbered n or below will be accessible even if the above configuration is present because they will be replaced with local virtual consoles instead.<br />
<br />
A tty device file's major number will change from 4 to 136 if it is a local virtual console. This change is visible within the container but not when viewing the container's devices from the host's filesystem. This information is useful when troubleshooting.<br />
<br />
This can be checked from within a container thus:<br />
<br />
# ls -Al /dev/tty*<br />
crw------- 1 root root 136, 10 Aug 21 21:28 /dev/tty1<br />
crw------- 1 root root 4, 2 Aug 21 21:28 /dev/tty2<br />
<br />
===== Pseudo Terminals =====<br />
<br />
lxc.pseudo = 1024<br />
<br />
This sets the maximum number of pseudo terminals that may be created in {{ic|/dev/pts}}. Currently, assuming the kernel was compiled with {{ic|CONFIG_DEVPTS_MULTIPLE_INSTANCES}}, this tells lxc-start to mount the devpts filesystem with the newinstance flag.<br />
<br />
====Host device access settings====<br />
<br />
lxc.cgroup.devices.deny = a # Deny all access to devices<br><br />
lxc.cgroup.devices.allow = c 1:3 rwm # dev/null<br />
lxc.cgroup.devices.allow = c 1:5 rwm # dev/zero<br><br />
lxc.cgroup.devices.allow = c 5:1 rwm # dev/console<br />
lxc.cgroup.devices.allow = c 5:0 rwm # dev/tty<br />
lxc.cgroup.devices.allow = c 4:0 rwm # dev/tty0<br><br />
lxc.cgroup.devices.allow = c 1:9 rwm # dev/urandom<br />
lxc.cgroup.devices.allow = c 1:8 rwm # dev/random<br />
lxc.cgroup.devices.allow = c 136:* rwm # dev/pts/*<br />
lxc.cgroup.devices.allow = c 5:2 rwm # dev/pts/ptmx<br><br />
# No idea what this is .. dev/bsg/0:0:0:0 ???<br />
lxc.cgroup.devices.allow = c 254:0 rwm<br />
<br />
=====Host device access settings explained=====<br />
<br />
'''lxc.cgroup.devices.deny''' : By setting this to ''a'', we are stating that the container has access to no devices unless explicitly defined within the configuration file.<br />
<br />
===Configuration file notes===<br />
====At runtime /dev/ttyX devices are recreated====<br />
If you have enabled multiple DevPTS instances in your kernel, lxc-start will recreate ''lxc.tty'' amount of {{ic|/dev/ttyX}} devices when it is executed.<br />
<br />
This means that you will have ''lxc.tty'' amount of pseudo ttys. If you are planning on accessing the container via a "real" terminal ({{ic|Ctrl+Alt+FX}}), make sure that its number is less than ''lxc.tty''.<br />
<br />
To tell whether it has been re-created, just log in to the container via either lxc-console or SSH and perform a {{ic|ls -Al}} command on the tty. Devices with a major number of 4 are "real" tty devices whereas a major number of 136 indicates a pts.<br />
<br />
Be aware that this is only visible from within the container itself and not from the host.<br />
<br />
====Containers have access to host's TTY nodes====<br />
<br />
If you do not properly restrict the container's access to the /dev/tty nodes, the container may have access to the host's.<br />
<br />
Taking into consideration that, as previously mentioned, lxc-start recreates ''lxc.tty'' amount of /dev/tty devices, any tty nodes present in the container that are of a greater minor number than ''lxc.tty'' will be linked to the host's.<br />
<br />
=====To access the container from a host TTY=====<br />
<br />
# On the host, verify no getty is started for that tty by checking ''/etc/inittab''.<br />
# In the container, start a getty for that tty.<br />
<br />
=====To prevent access to the host TTY=====<br />
<br />
Please have a look at the configuration statements found in [[#Host device access settings|host device access settings]].<br />
<br />
Via ''lxc.cgroup.devices.deny = a'' we are preventing access to all host-level devices. Then, through ''lxc.cgroup.devices.allow = c 4:'''1''' rwm'', we are allowing access to the host's /dev/tty'''1'''. In the above example, simply removing all allow statements for major number 4 and minor > 1 should be sufficient.<br />
<br />
=====To test this access=====<br />
<br />
The output of the ''ls'' command below shows both the ''major'' and ''minor'' device numbers. These are located after the owner and group, represented as: 4, 2.<br />
<br />
# Set lxc.tty to 1<br />
# Make sure that the container has /dev/tty1 and /dev/tty2<br />
# ''lxc-start'' the container<br />
# ''lxc-console'' into the container<br />
# ''ls -Al /dev/tty''<br>crw------- 1 root root 4, 2 Dec 2 00:20 /dev/tty2<br />
# ''echo "test output" > /dev/tty2''<br />
# ''Ctrl+Alt+F2'' to view the host's second terminal<br />
# You should see "test output" printed on the screen<br />
<br />
====Configuration troubleshooting====<br />
<br />
=====console access denied: Permission denied=====<br />
<br />
If, when executing lxc-console, you receive the error ''lxc-console: console access denied: Permission denied'' you have most likely either omitted lxc.tty or set it to 0.<br />
<br />
=====lxc-console does not provide a login prompt=====<br />
<br />
Though you are reaching a tty on the container, it most likely is not running a getty. You will want to double check that you have a getty defined in the container's ''/etc/inittab'' for the specific tty.<br />
<br />
If using '''systemd''', chances are that a problem with the ''getty@.service'' script will bite you. The script only starts a getty if ''/dev/tty0'' exists, and since this condition is not met in the container, you get no getty. Use this patch to let ''lxc-console'' finally work:<br />
<br />
<pre><br />
--- /usr/lib/systemd/system/getty@.service.orig 2013-05-30 12:55:28.000000000 +0000<br />
+++ /usr/lib/systemd/system/getty@.service 2013-06-16 23:05:49.827146901 +0000<br />
@@ -20,7 +20,8 @@<br />
# On systems without virtual consoles, don't start any getty. (Note<br />
# that serial gettys are covered by serial-getty@.service, not this<br />
# unit<br />
-ConditionPathExists=/dev/tty0<br />
+ConditionVirtualization=|lxc<br />
+ConditionPathExists=|/dev/tty0<br />
<br />
[Service]<br />
# the VT is cleared by TTYVTDisallocate<br />
</pre><br />
<br />
For more than one getty you have to explicitly enable the needed service (and decrease ''lxc.tty'' in the container configuration) by doing this:<br />
# ln -sf /usr/lib/systemd/system/getty@.service /etc/systemd/system/getty.target.wants/getty@ttyX.service<br />
The ''ttyX'' should be replaced by the tty you want to use, such as ''tty2''. On a ''real'' system, a configurable number of getty services is automatically created by ''systemd-logind.service''.<br />
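The {{ic|ln -sf}} above is all that "enabling" a unit for a target means: a symlink in the target's {{ic|.wants}} directory. The sketch below recreates that wiring under a temporary root with illustrative paths, not on a real system:

```shell
# "Enabling" getty@tty2.service for getty.target is just this symlink;
# recreated under a temporary root so nothing real is touched.
root=$(mktemp -d)
mkdir -p "$root/usr/lib/systemd/system" \
         "$root/etc/systemd/system/getty.target.wants"
touch "$root/usr/lib/systemd/system/getty@.service"
ln -sf "$root/usr/lib/systemd/system/getty@.service" \
       "$root/etc/systemd/system/getty.target.wants/getty@tty2.service"
readlink "$root/etc/systemd/system/getty.target.wants/getty@tty2.service"
```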
<br />
===Configuring fstab===<br />
none $CONTAINER_ROOTFS/dev/pts devpts defaults 0 0<br />
none $CONTAINER_ROOTFS/proc proc defaults 0 0<br />
none $CONTAINER_ROOTFS/sys sysfs defaults 0 0<br />
none $CONTAINER_ROOTFS/dev/shm tmpfs defaults 0 0<br />
<br />
This fstab is used by lxc-start when mounting the container. As such, you can define any mount that would be possible on the host such as bind mounting to the host's own filesystem. However, please be aware of any and all security implications that this may have.<br />
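Each fstab line is six whitespace-separated fields (device, mount point, type, options, dump, pass). A quick sanity check with {{ic|awk}}, run here against a hypothetical container fstab:

```shell
# Every non-blank fstab line should have exactly six fields; anything
# else is a malformed entry. The fstab below is a hypothetical example.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
none rootfs/dev/pts devpts defaults 0 0
none rootfs/proc    proc   defaults 0 0
none rootfs/sys     sysfs  defaults 0 0
EOF
awk 'NF && NF != 6 { print "bad line " NR; bad=1 } END { exit bad }' "$fstab" \
  && echo "fstab OK"
```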
<br />
'''Warning''' : You certainly do not want to bind mount the host's /dev to the container as this would allow it to, amongst other things, reboot the host.<br />
<br />
==Troubleshooting==<br />
<br />
===Container cannot be stopped when using systemd===<br />
<br />
''lxc-stop'' should be used for clean shutdown or reboot of the container, but only ''reboot'' works out of the box when using systemd.<br />
<br />
Shutdown is signalled to the container with ''SIGPWR'', but current systemd does not ship any service to handle ''sigpwr.target''. For the container, however, we can simply reuse ''poweroff.target'' and get exactly what we want.<br />
<br />
# ln -s /usr/lib/systemd/system/poweroff.target ${CONTAINER_RFS}/etc/systemd/system/sigpwr.target<br />
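The same fix can be expressed as a reusable sketch; the function name is hypothetical and the rootfs path is an assumption you supply:

```shell
# Hypothetical helper: alias sigpwr.target to poweroff.target inside a
# container rootfs, so that the SIGPWR sent by lxc-stop triggers a clean
# shutdown instead of being ignored.
link_sigpwr() {
    rootfs=$1
    mkdir -p "$rootfs/etc/systemd/system"
    ln -sf /usr/lib/systemd/system/poweroff.target \
        "$rootfs/etc/systemd/system/sigpwr.target"
}
```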
<br />
== See also ==<br />
<br />
*[https://www.stgraber.org/2013/12/20/lxc-1-0-blog-post-series/ LXC 1.0 Blog Post Series]<br />
*[http://www.ibm.com/developerworks/linux/library/l-lxc-containers/ LXC@developerWorks]<br />
*[http://docs.docker.io/en/latest/installation/archlinux/ Docker Installation on ArchLinux]</div>Joridoshttps://wiki.archlinux.org/index.php?title=ArchWiki:Contributing&diff=324003ArchWiki:Contributing2014-07-08T02:49:24Z<p>Joridos: /* Translating */</p>
<hr />
<div>[[Category:ArchWiki]]<br />
[[es:ArchWiki:Contributing]]<br />
[[ja:ArchWiki:Contributing]]<br />
[[zh-CN:ArchWiki:Contributing]]<br />
ArchWiki strives to be a clear, comprehensive and professional model of documentation. In order to reach that goal we have to organize the job in a rational and functional way: this article outlines the most important tasks you can help to accomplish. <br />
<br />
This is a community effort, but if you take on the task seriously, a formal position as a [[ArchWiki:Maintenance Team|wiki maintainer]] may be in order.<br />
<br />
{{Note|Always use the Edit Summary when editing articles, see [[Help:Style#Edit summary]]. If you intend to perform large changes that involve several articles, please discuss your plans first in an appropriate discussion page.}}<br />
<br />
== Prerequisites ==<br />
Here is a list of pages containing information needed for contributing. <br />
* [[Help:Article naming guidelines]] discusses effective article naming strategies and considerations to ensure readability and improve navigation of the wiki. Read it before creating any new page.<br />
* [[Help:Editing]] outlines both widely-known MediaWiki markup and ArchWiki-specific guidelines. A must-read for all would-be contributors.<br />
* [[Help:Discussion]] explains how to discuss with other users on talk pages.<br />
* [[Help:Style]] gives guidelines for the standardization of style in ArchWiki articles. <br />
* [[Help:Category]] explains how to set a page's category, how to create missing category pages and how to move a page to a different category. <br />
* [[ArchWiki Translation Team]] has a step by step introduction for creating a new translation page. Follow it to translate pages to your own language.<br />
* [[Help:i18n]] serves as a comprehensive guideline for ArchWiki internationalization and localization.<br />
See the [[:Category:Help|help category]] for all help articles.<br />
<br />
== Easy to start jobs ==<br />
Here are some easy jobs you can start with before moving to more difficult ones. <br />
<br />
=== Fix double redirects===<br />
# Read [[Help:Editing#Redirects|this section]] to understand what redirects are.<br />
# Check out [[Special:DoubleRedirects]] to see if there are any.<br />
# For example, if you see {{ic|Pastebin Clients (Edit) → Common Applications → List of applications}}, it means [[Pastebin Clients]] redirects to [[Common Applications]], and [[Common Applications]] redirects to [[List of applications]]. Therefore, [[Pastebin Clients]] is a double redirect.<br />
# To fix it, edit [[Pastebin Clients]] and change {{ic|<nowiki>#REDIRECT [[Common Applications]]</nowiki>}} to {{ic|<nowiki>#REDIRECT [[List of applications]]</nowiki>}} to skip the middleman.<br />
# Enter an edit summary such as {{ic|Fixed double redirect}} and save.<br />
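Double redirects can also be listed programmatically through the MediaWiki API; the ''querypage'' module with {{ic|qppage=DoubleRedirects}} is standard MediaWiki, but the helper below is only a sketch — check the live ''api.php'' before relying on it:

```shell
# Build a MediaWiki API URL that lists double redirects via the standard
# querypage module. The base URL is passed in by the caller.
double_redirects_url() {
    base=$1
    echo "${base}?action=query&list=querypage&qppage=DoubleRedirects&qplimit=50&format=json"
}
```

Fetch the list with, for example, {{ic|curl -s "$(double_redirects_url https://wiki.archlinux.org/api.php)"}}.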
<br />
===Use internal links===<br />
Replace external links that point to the ArchWiki with internal links. <br />
* list of [https://wiki.archlinux.org/index.php?title=Special%3ALinkSearch&target=http%3A%2F%2Fwiki.archlinux.org&namespace=0 http links]<br />
* list of [https://wiki.archlinux.org/index.php?title=Special%3ALinkSearch&target=https%3A%2F%2Fwiki.archlinux.org&namespace=0 https links]<br />
<br />
==Creating==<br />
Ensure you understand the philosophy of ArchWiki and think about what others may want to read (see [[ArchWiki:Requests]] for ideas). As mentioned, the wiki's scope is quite wide. <br />
<br />
Talk to the [[ArchWiki:Administrators|admins]] for help coordinating major projects.<br />
<br />
==Improving==<br />
Content editing, proofreading, and updating are never-ending tasks on any wiki. If you want to help, just register an account and start performing your magic.<br />
<br />
* Help fulfill [[ArchWiki:Requests|requests]].<br />
* Add content to [[Special:WhatLinksHere/Template:Stub|stubs]] and expand [[Special:WhatLinksHere/Template:Expansion|incomplete]] or [[Special:WhatLinksHere/Template:Poor writing|poorly-written]] articles.<br />
* Correct [[Special:WhatLinksHere/Template:Accuracy|inaccurate]] or [[Special:WhatLinksHere/Template:Out of date|outdated]] content, spelling, grammar, language, and [[Help:Style|style]]. <br />
* Update the [[FAQ]] with relevant questions from the forum and remove obsolete questions.<br />
<br />
==Maintenance==<br />
* Help patrol the [[Special:RecentChanges|Recent Changes]], reporting and fixing [[ArchWiki:Reports|questionable edits]].<br />
* Flag articles with appropriate [[Help:Template#Article status templates|article status templates]], like {{Ic|<nowiki>{{Deletion}}</nowiki>}}, {{Ic|<nowiki>{{Out of date}}</nowiki>}}, {{Ic|<nowiki>{{Accuracy}}</nowiki>}}, etc.<br />
* Add some articles to your watchlist and protect them against counter-productive edits.<br />
* Participate in discussions in the various talk pages: most users will be interested in following the most recent posts to generic discussions at [https://wiki.archlinux.org/index.php?namespace=1&title=Special%3ARecentChanges this link]. Maintenance-specific posts are instead listed in [https://wiki.archlinux.org/index.php?namespace=13&title=Special%3ARecentChanges], [https://wiki.archlinux.org/index.php?namespace=11&title=Special%3ARecentChanges], [https://wiki.archlinux.org/index.php?namespace=5&title=Special%3ARecentChanges] and [https://wiki.archlinux.org/index.php?namespace=15&title=Special%3ARecentChanges]; furthermore, [[ArchWiki:Requests]] is used as a talk page despite its namespace.<br />
<br />
==Organizing==<br />
Sorting, categorizing, and moving articles around has become a major task for all wiki maintainers implementing and improving the [[Table of contents|category tree]].<br />
<br />
* Reduce and [[Special:WhatLinksHere/Template:Merge|combine]] duplicate pages.<br />
* Improve wiki [[Table of contents|navigation]].<br />
* Categorize [[Special:UncategorizedPages|uncategorized pages]], [[Special:UncategorizedCategories|uncategorized categories]] and [[Special:WantedCategories|wanted categories]]. See [[Help:Style#Categories]] and [[Help:Style#Category pages]].<br />
* Fix [[Special:BrokenRedirects|broken]] and [[Special:DoubleRedirects|double]] redirects.<br />
* See [[Special:SpecialPages]] for other useful cleanup tools.<br />
<br />
==Translating==<br />
[[Special:WhatLinksHere/Template:Translateme|Add]] or [[Special:WhatLinksHere/Template:Bad translation|improve]] translations; ensure that translations are in sync with each other. Some languages have started collaboration projects (see list below) to efficiently organize the translation of the articles. Please consider joining your language's Translation Team, or at least informing it when you start translating an article. <br />
* [[ArchWiki Translation Team]]<br />
* [[ArchWiki Translation Team (Español)]]<br />
* [[ArchWiki Translation Team (Hrvatski)]]<br />
* [[ArchWiki Translation Team (Italiano)]]<br />
* [[ArchWiki Translation Team (Polski)]]<br />
* [[ArchWiki Translation Team (Português)]]<br />
* [[ArchWiki Translation Team (Русский)]]<br />
* [[ArchWiki Translation Team (日本語)]]<br />
* [[ArchWiki Translation Team (简体中文)]]<br />
* [[ArchWiki Translation Team (Ελληνικά)]]<br />
* [[ArchWiki Translation Team (Català)]]<br />
<br />
Some languages use an external wiki, which may require different steps for translation. [[Help:i18n]] has a list of external wikis.<br />
<br />
=== Special messages translation ===<br />
There are some special messages used on the ArchWiki; please provide their translation for your language at the following links:<br />
* Also in (message in the categories tree): add the translation [[Talk:Table of contents#"also in" translations|here]]<br />
* Talkpagetext: add the translation [[MediaWiki talk:Talkpagetext#Talkpagetext translation|here]]<br />
<br />
==Brainstorming==<br />
If unsure where to begin, or if you feel awkward about editing other people's work, you may also contribute by posting ideas and suggestions in the [https://bbs.archlinux.org/viewforum.php?id=13 Forum & Wiki discussion] section of the [https://bbs.archlinux.org/ Arch Linux Forums].<br />
<br />
==Back-end maintenance==<br />
Use the [https://bugs.archlinux.org/ Arch Linux Bugtracker] to report bugs and contribute fixes and improvements to the MediaWiki codebase. See the [https://bugs.archlinux.org/index.php?string={wiki}&project=0 list of reported bugs] for the ArchWiki back-end.<br />
<br />
==Complaining==<br />
Yes, even generic complaints, if made in a civil manner, are a way to help improve the wiki! Please use [[ArchWiki:Complaints]] for that purpose.</div>Joridos