LXC

Linux containers (lxc) are the future standard for container-based virtualization under Linux, deprecating legacy solutions such as Linux-VServer and OpenVZ.

Introduction
Apparently lxc was initially created by IBM, and is currently being aggressively pushed to the mainline Linux kernel.


Unfortunately right now it's rather difficult to find reliable documentation on how to get things working, so I decided to create this page to share some info with other Gentoo users wishing to make use of the new functionality.

Virtualization concepts
This section is a basic overview of how lxc fits into the virtualization world, the type of approach it uses, and the benefits and limitations thereof. If you are trying to figure out whether lxc is for you, or it's your first time setting up virtualization under Linux, then you should at least skim this section. If you are already familiar with virtualization, feel free to skip ahead to the lxc Components section.

Roughly speaking there are two types of virtualization in use today, container-based virtualization and full virtualization.

Container-based Virtualization (lxc)
Container based virtualization is very fast and efficient. It's based on the premise that an OS kernel provides different views of the system to different running processes. This sort of segregation or compartmentalisation (sometimes called "thick sandboxing") can be useful for ensuring guaranteed access to hardware resources such as CPU and IO bandwidth, whilst maintaining security and efficiency.

On the unix family of operating systems, container-based virtualization is said to have its roots in the 1982 release of the chroot tool, which provides isolation at the filesystem level only; it was written by Sun Microsystems co-founder Bill Joy and published as part of 4.2BSD.

Since this early tool, which has become a mainstay of the unix world, a large number of unix developers have worked to mature more powerful container based virtualization solutions. Some examples:
 * Solaris Zones
 * FreeBSD Jails
 * Linux VServer
 * OpenVZ

On Linux, historically the major two techniques have been Linux-VServer (open source / community driven) and OpenVZ (a free spinoff of a commercial product).

However, neither of these will be accepted into the Linux kernel. Instead Linus has opted for a more flexible, longer-term approach to achieving similar goals, using various new kernel features. lxc is the next-generation container-based virtualization solution that uses these new features.

Conceptually, lxc can be seen as a further development of the existing 'chroot' technique with extra dimensions added. Where 'chroot'-ing only offers isolation at the filesystem level, lxc offers complete logical isolation of a container from the host and from all other containers. In fact, installing a new Gentoo container from scratch is pretty much the same as any normal Gentoo installation.

Some of the most notable differences are:
 * each container will share the kernel with the host (and other containers). No kernel needs to be present and/or mounted in the container's /boot directory;
 * devices and filesystems will be (more or less) 'inherited' from the host, and need not be configured as they would be for a normal installation;
 * if the host is using the openrc system for bootstrapping, such configuration items will "automagically" be omitted (e.g. filesystem mounts from fstab).

The last point is important for keeping lxc-based installations as simple as possible and as close as possible to normal installations (no exceptions). At the time of writing, lxc and openrc are not quite in sync, and some version lock-ins occur (e.g. lxc-0.7.3-r1 with openrc-0.6.3). In general, the collaboration between openrc and lxc is not very well documented yet (even in this document). I believe there is also an important future role to play here for the Gentoo documentation team, as nothing can be found yet on the Gentoo website about deploying Gentoo as lxc containers.

Full Virtualization (not lxc)
Full virtualization solutions (also known as paravirtualization solutions when software components have been introduced into the guest, usually for the purposes of efficiency), unlike lxc and other container-based solutions, usually allow you to run any operating system, since the emulation platform actually gets right down to emulating the hardware in question. Whilst this may be useful for the purposes of security and server consolidation, it is hugely inefficient compared to container-based solutions. The most popular solutions in this area right now are probably VMware, KVM and Xen.

Limitations of lxc
With lxc, you can efficiently manage resource allocation in real time. In addition, you should be able to run different Linux distributions on the same host kernel in different containers, though there may be teething issues with startup and shutdown 'run control' (rc) scripts, which may need slight modification for some guests to work. That said, maintainers of tools such as openrc are increasingly implementing lxc detection to ensure correct behaviour when their code runs within containers.

Unlike full virtualization solutions, lxc will not let you run other operating systems (such as proprietary operating systems, or other types of unix).

However, in theory there is no reason why you can't install a full or paravirtualization solution on the same kernel as your lxc host system and run both full/paravirtualised guests in addition to lxc guests at the same time.

Should you elect to do this, there is a powerful abstracted virtualization management API under development, known as libvirt, that you may wish to check out.

In short:
 * One kernel
 * One operating system
 * Many instances
...but lxc can co-exist with other virtualization solutions if required.

MAJOR Temporary Problems with LXC - READ THIS
As has been documented elsewhere, containers are basically not functional as security containers at present, in that if you have root in a container you have root on the whole box.
 * root in a container has all capabilities
   * Workarounds:
     * Do not treat root privileges in the container any more lightly than on the host itself.
     * Use lxc.cap.drop to get rid of capabilities (see the example below this list).
 * legacy UID/GID comparisons in many parts of the kernel code are dumb and will not respect containers
   * Workarounds:
     * Do not mount parts of external filesystems within a container, except ro (read only).
     * Do not re-use UIDs/GIDs between the container and the host.
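
For example, a guest configuration might drop a few capabilities along these lines (a sketch; which capabilities your guest can live without depends on what it runs):

# capability names are the CAP_* constants, lowercased, without the CAP_ prefix
lxc.cap.drop = sys_module sys_time sys_admin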

Containers are still useful for isolating applications, including their networking interfaces, and applying resource limits and accounting to those applications. As the above issues are resolved, they will also become functional security containers.

If you are designing a virtualisation solution for the long term and want a timeframe, then with appropriate disclaimers, judging from various comments and experience, an extremely rough timeframe might be 'circa end of 2012'. But no guarantees.

See also CAP_SYS_ADMIN: the new root and method to break out of an LXC container via sysfs (Note this can apparently be disallowed by a combination of: (1) dropping CAP_SYS_ADMIN (allows the mount system call within the container); and (2) not mounting /sys for the guest).

lxc Components
lxc uses two new / lesser-known kernel features known as 'control groups' and 'POSIX file capabilities'. It also includes 'template scripts' to set up different guest environments.

Control Groups
Control Groups are a multi-hierarchy, multi-subsystem resource management / control framework for the Linux kernel.

In simpler language, what this means is that unlike the old chroot tool which was limited to the file subsystem, control groups let you define a 'group' encompassing one or more processes (eg: sshd, Apache) and then specify a variety of resource control and accounting options for that control group against multiple subsystems, such as:
 * filesystem access
 * general device access
 * memory resources
 * network device resources
 * CPU bandwidth
 * block device IO bandwidth
 * network priority
 * various other aspects of a control group's view of the system

The user-space access to these new kernel features is a kernel-provided filesystem, known as 'cgroup'. It is typically mounted at /sys/fs/cgroup/. This directory includes files similar to /proc and /sys representing the control group environment for the host system. Subdirectories represent the same information for control groups and their children in a hierarchical fashion.

While it is possible to mount all subsystems in one directory, apparently the norm is now to split them.
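
For example, a split layout might be assembled roughly as follows (a sketch; the available subsystem names depend on your kernel configuration):

mount -t tmpfs cgroup_root /sys/fs/cgroup
mkdir /sys/fs/cgroup/cpuset /sys/fs/cgroup/memory
mount -t cgroup -o cpuset cpuset /sys/fs/cgroup/cpuset
mount -t cgroup -o memory memory /sys/fs/cgroup/memory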

(For more background, see Coming to love control groups, Jonathan Corbet, 2011 Kernel Summit coverage, LWN.net, October 24, 2011.)

POSIX File Capabilities
POSIX file capabilities are a way to allocate privileges to a process that allow for more specific security controls than the traditional 'root' vs. 'user' privilege separation on unix family operating systems.

Sean Lynn's Linux containers HOWTO document states "These are needed because the lxc-start and lxc-execute programs turn off CAP_SYS_BOOT for container processes (ie. the ability to reboot the container). Found this quite by accident on Gentoo since I'd not enabled it by default".

For more information on POSIX file capabilities, see:
 * POSIX file capabilities: Parceling the power of root (lengthy article)
 * Stalkr's blog entry (more practical)
 * How Linux Capability Works in 2.6.25 (API / implementation overview)
 * There is also some info for Linux-VServer users, potentially of interest if you are still undecided about lxc and want some Linux-VServer features in addition to the POSIX capabilities.

Note that the kernel option to enable these appears to have been deprecated in recent kernels, ie: they are apparently now always available.

Host Setup
To get an lxc-capable host system working you will need the following components:
 * Kernel with the appropriate LXC related options enabled
 * (Probably, depending on your needs...)
 * Kernel with ethernet bridging enabled
 * A configured ethernet bridge
 * lxc userspace utilities
 * Mounted 'cgroup' filesystem (provides user-space access to the required kernel control group features)

Kernel with the appropriate LXC options enabled
If you are unfamiliar with recompiling kernels, see the copious documentation available on that subject in addition to the notes below.

Kernel options required
The complete list of relevant kernel options (tested on 2.6.35.7) is as follows. You can check your running kernel with the lxc-checkconfig script.
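
For example (the script reads /proc/config.gz by default; it also honours a CONFIG environment variable pointing at another kernel config):

CONFIG=/usr/src/linux/.config lxc-checkconfig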

Freezer Support
Freezer support allows you to 'freeze' and 'thaw' a running guest, something like 'suspend' under VMWare products. It appears to be under heavy development as of October 2010 (LXC list) but is apparently mostly functional. Please add additional notes on this page if you explore further.
 * CONFIG_CGROUP_FREEZER / "Freeze/thaw support" ('General Setup -> Control Group support -> Freezer cgroup subsystem')

Scheduling Options
Scheduling allows you to specify how much hardware access (CPU bandwidth, block device bandwidth, etc.) control groups have.
 * CONFIG_CGROUP_SCHED / "Cgroup sched" ('General Setup -> Control Group support -> Group CPU scheduler')
 * CONFIG_FAIR_GROUP_SCHED / "Group scheduling for SCHED_OTHER" ('General Setup -> Control Group support -> Group CPU scheduler -> Group scheduling for SCHED_OTHER')
 * CONFIG_BLK_CGROUP / "Block IO controller" ('General Setup -> Control Group support -> Block IO controller')
 * CONFIG_CFQ_GROUP_IOSCHED / "CFQ Group Scheduling support" ('Enable the block layer -> IO Schedulers -> CFQ I/O scheduler -> CFQ Group Scheduling support')

Resource Counters (Memory/Swap Accounting)
Resource counters are an 'accounting' feature: they allow you to measure resource utilisation in your guest. They are also an apparent prerequisite for limiting memory and swap utilisation.
 * CONFIG_RESOURCE_COUNTERS / "Resource counters" ('General Setup -> Control Group support -> Resource counters')

For memory resources:
 * CONFIG_CGROUP_MEM_RES_CTLR / "Cgroup memory controller" ('General Setup -> Control Group support -> Resource counters -> Memory Resource Controller for Control Groups')

If you also want to count swap utilisation, additionally select:
 * CONFIG_CGROUP_MEM_RES_CTLR_SWAP / "Memory Resource Controller Swap Extension (EXPERIMENTAL)" ('General Setup -> Control Group support -> Resource counters -> Memory Resource Controller for Control Groups -> Memory Resource Controller Swap Extension')

CPU Accounting
This allows you to measure the CPU utilisation of your control groups.
 * CONFIG_CGROUP_CPUACCT / "Cgroup cpu account" ('General Setup -> Control Group support -> Simple CPU accounting cgroup subsystem')

Networking Options
Ethernet bridging, veth, macvlan and vlan (802.1q) support are optional, but you probably want these. (Note that there is a newer, more capable bridge mechanism known as Open vSwitch, included in recent (3.3.x) kernels; you might want this if you are setting up large and complex virtualization environments across many nodes.)
 * CONFIG_BRIDGE / "802.1d Ethernet Bridging" ('Networking support -> Networking options -> 802.1d Ethernet Bridging')
 * CONFIG_VETH / "Veth pair device"
 * CONFIG_MACVLAN / "Macvlan"
 * CONFIG_VLAN_8021Q / "Vlan"

LXC and grsec
These grsec chroot restrictions make LXC unusable and should be disabled:
 * CONFIG_GRKERNSEC_CHROOT_MOUNT
 * CONFIG_GRKERNSEC_CHROOT_DOUBLE
 * CONFIG_GRKERNSEC_CHROOT_PIVOT
 * CONFIG_GRKERNSEC_CHROOT_CHMOD
 * CONFIG_GRKERNSEC_CHROOT_CAPS

This post in Flameeyes's Weblog has more info on this issue.

Reconfigure an existing >2.6.29 kernel
If you already run a 2.6.29 or later kernel and are comfy with kernel reconfiguration, then use the lxc-checkconfig tool to list kernel options that you need to enable in order to make your existing kernel configuration lxc compatible. Process would be something like...
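
cd /usr/src/linux
lxc-checkconfig        # lists which required options the running kernel is missing
make menuconfig        # enable the options listed above
make && make modules_install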

Then copy your kernel to your boot partition, reconfigure your boot loader, and reboot.

Upgrade from a <2.6.29 kernel
If your current kernel version is lower than 2.6.29 (find out with uname -r), then you will need to upgrade. You can use gentoo-sources, vanilla-sources, or any other kernel flavour. Assuming your new kernel is in /usr/src/newkernel and your old kernel is in /usr/src/oldkernel, follow these steps:
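
A plausible sequence (adjust the kernel source paths and package name to taste):

emerge gentoo-sources                  # or your preferred kernel sources
cd /usr/src/newkernel
cp /usr/src/oldkernel/.config .
make oldconfig                         # answer the prompts, enabling the lxc options listed above
make && make modules_install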

Then copy your kernel to your boot partition, reconfigure your boot loader, and reboot.

lxc userspace utilities
Because lxc is currently very new, it is probably worth making sure that you have the absolute latest version. Therefore, before we begin, you should ensure that your portage tree is up to date with the following command:
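
emerge --sync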

Next, figure out which version of lxc is available with:
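
emerge -pv lxc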

These are the packages that would be merged, in order:

Calculating dependencies... done!

!!! All ebuilds that could satisfy "lxc" have been masked.
!!! One of the following masked packages is required to complete your request:
- app-emulation/lxc-0.7.3-r1 (masked by: ~amd64 keyword)
- app-emulation/lxc-0.7.2-r1 (masked by: ~amd64 keyword)

For more information, see the MASKED PACKAGES section in the emerge man page or refer to the Gentoo Handbook.

As you can see from the output, the package is masked by keyword. This means that you will not be able to install it without first manually overriding the block set by gentoo developers. To override the block above (masked due to insufficient amd64 architecture testing), you would use:
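
echo "app-emulation/lxc ~amd64" >> /etc/portage/package.keywords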

Then retry the portage preview command...


emerge -pv lxc

These are the packages that would be merged, in order:

Calculating dependencies... done!

[ebuild N    ] app-emulation/lxc-0.7.3-r1  USE="-doc -examples -vanilla" 264 kB

Total: 1 package (1 new), Size of downloads: 264 kB

Now go ahead and install with...
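
emerge lxc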

When the installation has completed, you will probably see a number of kernel features listed that are not enabled. Don't worry about this for now, we'll set up an appropriate kernel shortly.

Mounted cgroup filesystem
The 'cgroup' filesystem provides user-space access to the required kernel control group features, and is required by the lxc userspace utilities. Recent kernels introduced /sys/fs/cgroup as the default location. Depending on the kernel version, it may already be mounted. Otherwise, add the following line to /etc/fstab:

cgroup         /sys/fs/cgroup          cgroup          rw              0 0

Networking: Virtual Ethernet (Host-Guest Only)
The simplest networking configuration that still allows the guest to be reached from the host, short of assigning it a physical interface or a VLAN of a physical interface, is to assign a virtual ethernet (veth) adapter to the container.

This is the approach now taken with the lxc-gentoo automatic guest script.

Under this configuration, by default a new ethernet device appears on the host named (tun+<random string>) which links through to the guest's virtual ethernet device eth0.

The host interface name can be altered with the lxc.network.veth.pair directive.

The guest interface name can be altered with the lxc.network.name directive.

Additional connectivity can be granted to the guest using iptables.
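
Putting these directives together, the relevant guest configuration might look like this (a sketch; the interface names are illustrative):

lxc.network.type = veth
lxc.network.veth.pair = veth-guest0
lxc.network.name = eth0
lxc.network.flags = up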

Networking: Ethernet bridge
You may alternatively set up an ethernet bridge. Note that this requires the CONFIG_BRIDGE symbol to be enabled in your kernel.

Note that since 2011 or so it's apparently feasible to set up a more complex virtual topology with full support for VLANs and other features by using Open vSwitch. (This has not been tested - please return and add your experiences if you venture forward).

Installation
To check if the tools are already installed for configuring and modifying a bridge, use the portage preview command...


emerge -pv net-misc/bridge-utils

These are the packages that would be merged, in order:

Calculating dependencies... done!

[ebuild N    ] net-misc/bridge-utils-1.4  32 kB

Total: 1 package (1 new), Size of downloads: 32 kB

If you see this, the tools are not installed yet. Go ahead and install with...
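
emerge net-misc/bridge-utils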

Configuration notes
I appreciate the gentle Gentoo way of describing network devices in one centralised configuration file (/etc/conf.d/net), but I have not (yet) been able to work out how to describe a correct bridge. Issuing the following commands manually (in the given order) will suffice as well:


HOSTIP="192.168.1.48"               # use the actual IP address of your host's eth0 device
HOSTGW="192.168.1.1"                # use the actual IP address of your host's gateway
brctl addbr br0
brctl setfd br0 0
ifconfig br0 ${HOSTIP} promisc up   # <-- why promisc?
brctl addif br0 eth0                # <-- this line seems to destroy networking if eth0 is a preconfigured primary outbound interface ... bad!
ifconfig eth0 0.0.0.0 up
route add -net default gw ${HOSTGW}

(If you read this, and know how to create a bridge 'the Gentoo way', please give an example of a proper bridge description using /etc/conf.d/net here.)

OpenRC /etc/conf.d/network configuration
Here's how I'm doing it using the new /etc/conf.d/network configuration file (I believe the old /etc/conf.d/net and associated scripts are only there for backward compatibility):

In the container's config file:
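
Something like the following (a sketch; see the note below about lxc.network.hwaddr):

lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
#lxc.network.hwaddr = 00:16:3e:xx:xx:xx   # uncomment and fill in once known; see below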

If, like me, you are using dhcp inside the container to get an IP address, then run it once as shown. lxc will generate a random MAC address for the interface. To keep your DHCP server from getting confused, you will want to use that MAC address all the time. So find out what it is, then uncomment the 'lxc.network.hwaddr' line and specify it there.

OpenRC /etc/conf.d/net configuration
As an example, bridge configuration with DHCP:

config_eth0="null"
bridge_br0="eth0"
config_br0="dhcp"
rc_need_br0="net.eth0"

The last line is important as eth0 needs to be up before br0 can be created.

Create a new service entry for net.br0
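
cd /etc/init.d
ln -s net.lo net.br0    # net.* services are symlinks to net.lo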

Add both br0 and eth0 to the run level.
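
rc-update add net.eth0 default
rc-update add net.br0 default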

More documentation can be found in /usr/share/doc/openrc-0.6.3/net.example.

Bridge / Virtual switch scenario
There may be cases where you do not want, or cannot have, your physical network interface added to the bridge (a wireless interface cannot be added to a bridge; you may want more control over the container's network access; ...). This scenario describes how to set up a guest if you wish to share an existing physical interface (eg: eth0) with it, but still control how much access it is granted via iptables.

Because of NAT, containers must have their network addresses in a range different from your host network address. In this example, the LXC containers have addresses in the 172.20.xxx.xxx range and use 172.20.0.1 as their default gateway. The container has the IP 172.20.0.88.

Host configuration
First, we need to set up a bridge (our 'virtual switch') in 'the gentoo way', by adding the bridge device to the /etc/conf.d/net file as follows.
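
A sketch, assuming the 172.20.0.1 gateway address above and a bridge with no physical interfaces attached:

bridge_br0=""                  # no ports; the containers' veth devices will be added by lxc
config_br0="172.20.0.1/16"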

Next, create the init script and start the interface as follows:
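
cd /etc/init.d
ln -s net.lo net.br0
/etc/init.d/net.br0 start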

Finally, to make sure the bridge is automatically set up on subsequent boots, run:
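
rc-update add net.br0 default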

To grant the guest access to the internet, you will need to use iptables. If it's not installed, first emerge it.
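
emerge net-firewall/iptables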

Allow IP forwarding in your /etc/sysctl.conf or with the following command:
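
# or set net.ipv4.ip_forward = 1 in /etc/sysctl.conf
sysctl -w net.ipv4.ip_forward=1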

Add the iptables rules to grant masqueraded access to the internet. For example (substitute 'eth0' with your external facing physical interface):
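
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE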

This is equivalent to:

EXTIF=eth0   # external facing physical interface
IP=`ifconfig $EXTIF|grep 'inet addr'|cut -d ':' -f2|cut -d ' ' -f1`
iptables -t nat -A POSTROUTING -o $EXTIF -j SNAT --to-source $IP

Save the configuration and ensure it is restored at boot:
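
/etc/init.d/iptables save
rc-update add iptables default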

Guest configuration
Your guest network configuration resides in the guest's lxc.conf file. Documentation for this file is accessible with:
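
man 5 lxc.conf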

If you have used a template script to create your guest, this will typically reside in the parent directory of the guest's root filesystem. However, using /etc/lxc/ to store guest configurations is also common.

Your guest configuration should include the following network-related lines:
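
For example, using the addressing above (a sketch; note that the default route currently has to be configured inside the guest, see the lxc issues section below):

lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 172.20.0.88/16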

Template scripts
A number of 'template scripts' are distributed with the lxc package. These scripts assist with generating various guest environments.

Template scripts live in /usr/lib64/lxc/templates/ but should be executed via the lxc-create tool as follows:
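
lxc-create -n <guestname> -t <template> -f <configuration-file>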

Configuration files (the -f configuration-file option) are usually used to specify the network configuration for the guest. For example:

lxc.network.type=macvlan
lxc.network.link=eth0
lxc.network.flags=up

...or...

lxc.network.type=veth
lxc.network.link=br0
lxc.network.flags=up

The template scripts included in app-emulation/lxc-0.7.2-r1 are:
 * lxc-busybox assists with setting up minimal guests using Busybox (see busybox.net)
 * lxc-debian assists with setting up Debian guests (see debian.org). Note that in order to use lxc-debian, you must first install the dev-util/debootstrap package (see the Debian section below).
 * lxc-fedora assists with setting up Fedora guests (see fedoraproject.org). Note that in order to use lxc-fedora, you must install sys-apps/yum and the febootstrap tool from http://people.redhat.com/~rjones/febootstrap/ (an ebuild has been created but is not yet in portage); see the Fedora section below.
 * lxc-sshd assists with setting up minimal sshd guests (see openssh.com).
 * lxc-ubuntu assists with setting up Ubuntu guests (see ubuntu.com). Note that in order to use lxc-ubuntu, you must first install the dev-util/debootstrap package.
 * Usage is as sketched below this list. Running the script will create the folder 'ubuntu-guest'; inside the folder there will be a file called 'config'. The container is then started with lxc-start.
 * You should use the username and password of the existing system user used when creating the container.
 * To set the root password, enter the directory ubuntu-guest and you will see the directory rootfs; chroot into it and set the password with the passwd command (see the sketch below).
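
A plausible end-to-end sequence for the Ubuntu template, pieced together from the description above (a sketch; paths and names are illustrative):

lxc-ubuntu                                        # creates ./ubuntu-guest containing a 'config' file
lxc-start -n ubuntu-guest -f ubuntu-guest/config  # start the container
# then, from the host, to set the root password:
chroot ubuntu-guest/rootfs /bin/bash
passwd                                            # set the root password
exit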

Automatic setup: lxc-gentoo
The lxc-gentoo tool can download, extract and configure a gentoo guest for you. It fixes a lot of little issues that you may otherwise find tedious and are not yet outlined in the manual guest configuration section, below.

You can download it from the lxc-gentoo page; additional developers, bug fixes, comments, etc. are welcome. To install a gentoo container, run:
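
./lxc-gentoo create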

Manual Guest Configuration
LXC allows a configuration file for each guest container, specifying name, IP address, etc. In the latest lxc package (lxc-0.7.2-r1), the init scripts expect guest configuration to be at /etc/lxc/<guestname>.conf.

In fact, this is also the location used by the following userland tools: lxc-create and lxc-destroy. lxc-create will place a given configuration file for <guestname> in that location with the following command:
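
lxc-create -n <guestname> -f <original-configuration-file>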

If you did not use lxc-create to make a copy of your original configuration file at /etc/lxc/<guestname>.conf, but you put it there by hand, be aware that lxc-destroy will just delete your (original) file when you issue:
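
lxc-destroy -n <guestname>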

An example guest configuration is:

lxc.utsname = <guestname>            # name of your guest container
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0               # name of your network bridge, for bridged networking
lxc.network.ipv4 = 192.168.0.2/24    # IP address of guest
lxc.network.name = eth0
lxc.mount = /etc/lxc/<guestname>.fstab
lxc.rootfs = /var/lxc/<guestname>    # the location of the guest root filesystem
lxc.tty = 12

You also need an fstab, at the location specified by the lxc.mount configuration above (ie. /etc/lxc/<guestname>.fstab). Replace /var/lxc/<guestname> with the correct location of your container root filesystem.

none /var/lxc/<guestname>/dev/pts devpts defaults 0 0
none /var/lxc/<guestname>/proc    proc   defaults 0 0
none /var/lxc/<guestname>/sys     sysfs  defaults 0 0
none /var/lxc/<guestname>/dev/shm tmpfs  defaults 0 0
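
The remaining setup commands might look as follows (a sketch; the lxc.<guestname> symlink naming follows the Gentoo lxc init script's convention):

mkdir -p /var/lxc/<guestname>
ln -s /etc/init.d/lxc /etc/init.d/lxc.<guestname>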

The final command enables the guest to be started via init scripts.

Filesystem Setup
To set up gentoo as a guest system from scratch, you first need to create the root filesystem in a suitable directory, eg: /lxc/<guestname>/rootfs or /var/lxc/<guestname>

The process for doing this is similar to a normal Gentoo install (ie: as documented in the Gentoo Handbook). First you unpack a stage3 archive in to the directory, then chroot in and perform additional configuration as per the handbook.

However, there are some important differences, as of openrc-0.6.1-r1:
 * You must remove udev (removal is not needed as of openrc-0.6.3, which will skip udev automatically.)
 * What is udev? Traditionally device nodes were created manually on unix systems.  These days, however, people expect to add and remove hardware on the fly and want it to 'just work'.  After some abortive attempts, udev was developed and has become the normal solution for most Linux systems (read: non-virtualised systems).  It is a user-space daemon that interprets kernel device change events, creating device nodes, running scripts, and generally being helpful wherever necessary.  Unfortunately, since lxc creates a virtual system running on the same kernel as the host machine, udev is not the right solution.
 * To remove it, do one or both of the following :
 * carefully checking that you are within your container chroot you need to 'emerge -C udev' and ignore the warning. (WARNING: BE CAREFUL NOT TO DO THIS ON YOUR HOST SYSTEM!)
 * Change RC_DEVICES="auto" to RC_DEVICES="static" in the container's /etc/conf.d/rc file
 * You can probably simply copy most of the make.conf settings from your host system (assuming it's Gentoo).
 * Skip kernel installation
 * Skip boot loader installation (grub/lilo/whatever)
 * Note that the boot process will be different to - and much faster than - a normal system.
 * Recent openrc includes an lxc boot mode that can automatically avoid attempting to run irrelevant startup processes (hostname, hwclock, fsck, localmount, modules, mount-ro, numlock, procfs, root, swap, swclock, urandom).
 * If you are happy to use the guest name or the 'lxc.utsname' setting specified on the host as your hostname within the container, then you should also run rc-update del hostname boot to remove hostname initialisation from the openrc boot process.

Option: Shared /usr/portage/distfiles
If you want to share distfiles from your host, you can set the PORTAGE_RO_DISTDIRS variable to a space-separated list of directories to search. Portage will create a symlink in DISTDIR to the first matching file found in PORTAGE_RO_DISTDIRS if the file does not already exist in DISTDIR.
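
For example, in the guest's make.conf (a sketch; /usr/portage/distfiles.host is a hypothetical read-only bind mount of the host's distfiles directory):

DISTDIR="/usr/portage/distfiles"
PORTAGE_RO_DISTDIRS="/usr/portage/distfiles.host"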

Busybox
lxc contains a minimal template script for busybox. Busybox is basically a base system oriented towards embedded use, where many base utilities exist in an optimized form within one stripped binary to save on memory. Busybox is installed as part of the base gentoo system, so the script works right away. Example:
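
lxc-create -n busybox-guest -t busybox -f /etc/lxc/busybox-guest.conf   # 'busybox-guest' is an illustrative name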

Arch Linux
See the excellent documentation available on the Arch Linux Wiki.

Debian
You will need to install dev-util/debootstrap package.
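
emerge dev-util/debootstrap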

You can then use the lxc supplied debian template script to download all required files, generate a configuration file and a root filesystem for your guest.
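
lxc-create -n debian-guest -t debian -f /etc/lxc/debian-guest.conf   # 'debian-guest' is an illustrative name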

Fedora
Summary: In short this is a pain to get working right now.

First you will need to install sys-apps/yum. This depends on >=python-2 compiled with sqlite USE flag. In addition, sqlitecachec is masked on amd64 - so you need another step if you are on that platform. Try:
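
Something like the following (a sketch; adjust the package atom as portage suggests):

echo "dev-python/sqlitecachec ~amd64" >> /etc/portage/package.keywords
USE="sqlite" emerge sys-apps/yum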

You will also need to install febootstrap tool from http://people.redhat.com/~rjones/febootstrap/. An ebuild has been created but is not yet in portage:

Once the above is complete, use the lxc supplied template script to set up your guest. It will download all required files and generate a root filesystem image, reconfigure the system to live within the container, and generate an lxc guest configuration file:
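
lxc-create -n fedora-guest -t fedora -f /etc/lxc/fedora-guest.conf   # 'fedora-guest' is an illustrative name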

openSUSE
See the excellent openSUSE LXC Container Setup documentation.

Manual use
To start the guest, simply run:
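
lxc-start -n <guestname> -f /etc/lxc/<guestname>.conf -d   # -d daemonizes; omit it to watch the boot on screen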

To stop the guest, run:
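
lxc-stop -n <guestname>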

If for any reason guest fails to start, see the error messages by running:
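
# start in the foreground; -l and -o raise the log level and capture it to a file
lxc-start -n <guestname> -f /etc/lxc/<guestname>.conf -l DEBUG -o /var/log/lxc/<guestname>.log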

Please be aware that when you have daemonized the booting process (-d), you will not get any output on screen. This might happen when you conveniently use an alias which daemonizes by default, and forget about it. You may be puzzled later if there is a problem while booting a new container that has not been configured properly (e.g. network).

Use from gentoo init system
If you have made a symbolic link in /etc/init.d for each guest container you have created, then instead of using the LXC userland tools directly you can start and stop a guest as follows:
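
/etc/init.d/lxc.<guestname> start
/etc/init.d/lxc.<guestname> stop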

Of course, the use of such scripts is primarily intended for booting and stopping the system. To add a guest to the rc chain, run:
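
rc-update add lxc.<guestname> default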

To enter a (already started) guest directly from the host machine, see the lxc-console section below.

Logging console output
As of 0.7.3, you can reportedly use a line similar to the following to log all output:

lxc.console = /var/log/lxc/someguest.console

Otherwise try:

lxc-console
Using lxc-console provides console access to the guest. To use type:
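
lxc-console -n <guestname>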

If you get a message saying lxc-console: console denied by guestname, then you need to add to your container config: lxc.tty = 1

To exit the console, use:
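
Ctrl-a q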

Note that unless you log out inside the guest, you will remain logged on, so the next time you run lxc-console, you will return to the same session.

Usage of lxc-console should be restricted to root. It should primarily be a tool for system administrators to enter a (newly created) container, e.g. when the network connection is not properly configured yet. Using multiple instances of lxc-console on distinct guests works fine, but starting a second instance for a guest that is already governed by another lxc-console session leads to redirection of keyboard input and terminal output. It is best to otherwise avoid using lxc-console at all. (Perhaps the lxc developers should enhance the tool such that only singleton use per guest is possible. ;-)

Accessing the container with sshd
A common technique to allow users direct access into a system container is to run a separate sshd inside the container. Users then connect to that sshd directly. In this way, you can treat the container just like you treat a full virtual machine where you grant external access. If you give the container a routable address, then users can reach it without using ssh tunneling.

If you set up the container with a virtual ethernet interface connected to a bridge on the host, then it can have its own ethernet address on the LAN, and you should be able to connect directly to it without logically involving the host (the host will transparently relay all traffic destined for the container, without the need for any special considerations). You should be able to simply 'ssh <guest address>'.

syslog-ng
The following message may appear in the logs when using syslog-ng inside the container:

syslog-ng internal messages are looping back, preventing loop by suppressing further messages; recurse_count='2'

To get rid of these messages, output to tty12 has to be dropped. To achieve this, comment out the relevant lines in the container's /etc/syslog-ng/syslog-ng.conf, as in this example:
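
# default Gentoo syslog-ng.conf lines, now commented out:
#destination console_all { file("/dev/tty12"); };
#log { source(src); destination(console_all); };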


 * Reference: This article in Playing on the frontier

Checkpoint/Restore Patches
There are a series of patches being produced by Sukadev Bhattiprolu of IBM for checkpoint/restore functionality. Personally I have not played with it and do not know how it differs from 'freezer'. Download here:
 * http://lxc.sourceforge.net/patches/lxc+usercr/lxc-0.7.2/lxc-patches/

See documentation here:
 * LXC-USERCR Wiki

Host Filesystem Layout
The /etc/init.d/lxc init script expects guest configurations to be in /etc/lxc/<guestname>.conf.

However, you should keep the guests' root filesystems out of /etc since it's not a path that's supposed to store large volumes of binary data.

Personally I use the following locations:
 * /etc/lxc/<guestname>.conf = guest configuration file
 * /etc/lxc/<guestname>.fstab = guest fstab file
 * /var/lxc/<guestname>/ = guest filesystem root (symlinked to another location, ie. not installed physically under /etc)
 * /var/log/lxc/<guestname> = guest lxc-start logfile

Running X inside a container
It is possible to run X inside a container.

You should take care with the lxc.tty = n setting in your lxc guest config: all ttyX devices above n will be the host's. Since X runs by default on vt7, it would use your host's /dev/tty7 and not the guest's.

(It could be possible to set lxc.tty to 7 or above so that the guest's /dev/tty7 is used, but I didn't try.)

You could launch X on another virtual terminal with:
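
startx -- vt8   # vt8 is an illustrative choice of virtual terminal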

You also have to allow access to some devices needed by X in the lxc guest config. For example, /dev/mem or some input devices, according to your needs:

...
# deny access to all devices by default, then explicitly grant some permissions
# format is [c|b] [major|*]:[minor|*] [r][w][m]
#   char/block -'     `- device number    `-- read, write, mknod
lxc.cgroup.devices.deny = a
# /dev/mem
lxc.cgroup.devices.allow = c 1:1 rwm
# /dev/input/mice
lxc.cgroup.devices.allow = c 13:63 rwm
...

See http://www.kernel.org/doc/Documentation/devices.txt for more reference about device majors/minors.

General

 * Available networking setup for guests needs to be better documented
 * Because guest network configuration is presently impossible to perform solely from the host (in particular, route configuration), containers remain less manageable and portable than they could otherwise be
 * Gentoo init scripts might not be LXC aware. Because the host sees all processes from the guest containers, if you are running the same service on the host and in a guest (ie. samba), the init script may contain a killall command that will kill processes inside guest containers. To mitigate this, init scripts may need to remove killall commands and provide the pidfile to start-stop-daemon.
 * Most init scripts use start-stop-daemon in conjunction with PID files already. Please file a bug for those that do not.  I've added one for Samba:
 * Some daemons use setrlimit NPROC to limit the number of processes with the same UID (i.e. Avahi). When installed on hosts, this could cause the daemon to fail when forking as container processes may use the same UID thereby breaching their limit (details).

lxc

 * Whilst lxc.* directives in lxc.conf can set up a network interface and its netmask, there is no current way to add a default route... which makes networking config impossible to isolate to the host machine, and therefore guest management and portability is not ideal.

Historical problems

 * "kernel before 2.6.35 does not support physical interface moving across namespace" (Ubuntu LXC community page)

Credits
Information was adapted from:
 * /dev/random (once the correct lxc.device.* directive was added!)
 * failing memory
 * personal experience setting up lxc
 * the lxc-* manpages
 * IRC, particularly #lxcontainers on irc.freenode.net, especially dlezcano (Daniel Lezcano, LXC author/maintainer, IBM) and mhelsley (Matt Helsley, Linux Kernel Engineer, IBM) - thanks guys!
 * emails to various people
 * Gentoo LXC hosts and Gentoo LXC guests by Ropardo Software in Romania. (Note: Some info out of date for current openrc.)
 * Flameeyes' lxc posts - very good info (wrote the lxc ebuild + responded to emails)
 * Linux containers HOWTO by Sean Lynn (Gentoo focus at present, but rather light on detail)
 * LXC: Linux Container Tools at IBM Developer Works (very high level)

Disclaimer
While this page is supposed to be accurate, info has come from all over the place and may not necessarily reflect the truth, the whole truth, and nothing but the truth ... so don't count on it... ;) And if you find something wrong - FIX IT!