Xen

Purpose
Building Xen for dom0 and domU, and getting the first domU running.

Overview
Xen is an open-source para-virtualizing virtual machine monitor (VMM), or 'hypervisor', for the x86 processor architecture. Xen can securely execute multiple virtual machines on a single physical system with close-to-native performance. Xen facilitates enterprise-grade functionality, including:


 * Virtual machines with performance close to native hardware.
 * Live migration of running virtual machines between physical hosts.
 * Up to 32 virtual CPUs per guest virtual machine, with VCPU hotplug.
 * x86/32, x86/32 with PAE, and x86/64 platform support.
 * Hardware virtualization support:
   * Intel Virtualization Technology (VT-x) for unmodified guest operating systems (including Microsoft Windows).
   * AMD Virtualization Technology (SVM, a.k.a. Pacifica) on AM2 and F-stepping Opterons (2006H2).
 * Excellent hardware support (supports almost all Linux device drivers).

Example usage scenarios for Xen

 * Server Consolidation: Move multiple servers onto a single physical host with performance and fault isolation provided at the virtual machine boundaries.

 * Hardware Independence: Allow legacy applications and operating systems to exploit new hardware.

 * Multiple OS configurations: Run multiple operating systems simultaneously, for development or testing purposes.

 * Kernel Development: Test and debug kernel modifications in a sand-boxed virtual machine -- no need for a separate test machine.

 * Cluster Computing: Management at VM granularity provides more flexibility than separately managing each physical host, and better control and isolation than single-system-image solutions, particularly when using live migration for load balancing.

 * Hardware support for custom OSes: Allow development of new OSes while benefiting from the wide-ranging hardware support of existing OSes such as Linux.

Installation Overview
These are the key items that you will need when setting up your system to use Xen.


 * Hypervisor
 * Hypervisor aware host OS
 * Xen aware Client Kernel (or third party OSes)
 * Client Disk Images (custom built or prepackaged)
 * Management Tools
 * VM Configuration Files (custom built)

Each VM you run is called a Domain in Xen terms. Domain 0 (aka Dom0) is the master domain and replaces your normal Linux kernel. Through it you use the management tools to control other VMs. Other domains are unprivileged and are termed Domain U or DomU.

A key point to remember is that Xen requires the DomU systems to use special drivers to access hardware. Dom0 manages the hardware and its drivers act as a "backend" and manage access to the actual hardware.

Remember that the Xen 3.0 User Manual provides a large amount of authoritative information. It will help you understand many things about Xen that are not described in this HOWTO.

From a standard Gentoo system you will need to do the following to start with Xen:


 * Build the hypervisor (Xen)
 * Switch to using kernel source patched with Xen code (xen-sources)
 * Install the Xen management tools (xm, xend, etc.)
 * Build a Xen aware Dom0 kernel
 * Configure your boot loader to start Xen, which will in turn start your dom0

The Dom0 kernel will effectively replace your normal Linux kernel and will reuse the environment that you already had set up.

Once you have your system running Xen and Dom0 you can start configuring various DomUs.

In the simple case your DomU OS will be the same as your Dom0 OS but running off a different file system. In this case you can make your kernel configuration identical except for the Xen specific drivers.


 * Build or acquire a DomU kernel (vmlinuz)
 * Install the DomU kernel in your Dom0 system /boot partition
 * Create or acquire a disk image for your DomU
 * Write a VM configuration file
 * Start the client VM with the "xm" management tool

Profile
Ensure the system is running the latest Gentoo profile (currently 2008.0). Using the latest profile will ensure you're using a recent version of glibc with nptl. You can use any 2006.1 or later profile (including the desktop sub-profile).

You can check which profile the system is using by running:

Example profile list
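(An illustrative listing; your available profiles will differ.)
# eselect profile list
Available profile symlink targets:
  [1]   default-linux/x86/2006.1
  [2]   default-linux/x86/2007.0 *
  [3]   default-linux/x86/2007.0/desktop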

The currently selected profile is displayed with an asterisk (*) next to it. In the above example the selected profile is default-linux/x86/2007.0.

If the system is not showing any recent profiles, then you need to update your local portage repository by running:
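# emerge --sync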

For details on how to change the system profile, see the Gentoo Upgrading Guide


TLS and CFLAGS
Some software, in particular the glibc TLS library, is implemented in a way that conflicts with how Xen uses segment registers to circumvent a limitation of 32-bit x86 hardware platforms, causing poor performance for certain operations under Xen: roughly a 50% penalty when running multi-threaded applications. To fix this, you must compile your system with the '-mno-tls-direct-seg-refs' flag.

Edit your /etc/make.conf and add '-mno-tls-direct-seg-refs' to your CFLAGS. This is similar to the Xen instructions to move /lib/tls out of the way, but instead removes the trapped (slow) opcodes from every binary, not just glibc. If using the -Os flag (with any <gcc-4), change it to -O2, as the compiler is known to produce broken code otherwise.
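For example (the -march value is illustrative; keep the flags you already use):
CFLAGS="-O2 -march=i686 -pipe -mno-tls-direct-seg-refs"
CXXFLAGS="${CFLAGS}"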

You will also need to fix the CFLAGS for each domain you install. In practice, however, you will do this only once and save the result as your 'skeleton base' for all your domain Us. Following this article's method of using binary packages built by the host will also save you time.

nptlonly USE flag
The system must be using the nptlonly USE flag. To check whether this USE flag is currently enabled, run:
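One way to check is to look at the USE flags Portage reports for glibc (nptlonly should appear without a leading minus):
# emerge -pv glibc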

If it shows the USE flag as being off, then you need to add it to your global USE flags in /etc/make.conf.
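For example, in /etc/make.conf:
USE="nptl nptlonly"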

Activating buildpkg
Your system is about to be rebuilt entirely, but to save time later (when building the domU installs, assuming you're going to install Gentoo on them), activate the buildpkg Portage feature by adding it to FEATURES in /etc/make.conf:
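FEATURES="buildpkg"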

This feature tells Portage to create a binary package from every package it compiles and to store it in PKGDIR (/usr/portage/packages by default). For more information on this feature, see man make.conf.

Applying Changes
Note that this step may take quite some time as it will recompile every package on your system.

Update the system by running:
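(The -e/--emptytree option rebuilds every installed package against the new CFLAGS.)
# emerge -e world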

If you need an explanation of the flags used, run emerge --help or see man emerge.

Windows and other Unmodified Guests in domU (a.k.a. HVM Guests)
If you have a processor with Intel Virtualization Technology (VT, previously known as Vanderpool) or AMD Secure Virtual Machine (SVM, previously known as Pacifica) technology, you can run unmodified guest operating systems like Windows XP, unmodified Linux distributions, *BSD, Solaris x86, etc. Processors with hardware virtualization capability include the Pentium D 9x0 series, Intel Core, Intel Core 2 and many AMD AM2 CPUs. (Check for the vmx flag (Intel) or the svm flag (AMD) in /proc/cpuinfo.)
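# grep -E 'vmx|svm' /proc/cpuinfo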

Before installing xen and xen-tools you will need to add hvm to your USE flags. This is at least required with the current xen-3.1.2 ebuilds.

More information can be found at Xen: MS Windows systems as guest.

Building the hypervisor and applications
Xen is still masked. Unmask it:
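For example, assuming the packages are ~arch keyword-masked, add them to /etc/portage/package.keywords:
app-emulation/xen
app-emulation/xen-tools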

In /etc/portage/package.use you will also need to enable the hvm USE flag for xen-tools if you plan on using HVM:
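app-emulation/xen-tools hvm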

Install the hypervisor and applications by running:
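# emerge -av xen xen-tools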

Add the xen daemon to the default runlevel with:
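# rc-update add xend default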

The xen ebuild installs the hypervisor, while the xen-tools package installs both the xend daemon for controlling the virtual machines and various command line tools.

To configure the network, make your changes in /etc/conf.d/net but DO NOT add net.eth0 to the default runlevel: xend will start and configure your network at boot time. (While testing the initial kernel build on a machine on a remote net connection it may be useful to leave net.eth0 enabled and NOT autostart xend.)

Newer Gentoo releases automatically hotplug net.eth0 even if it is disabled in the default runlevel. You can disable this behaviour by changing the RC_PLUG_SERVICES variable in /etc/conf.d/rc:
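RC_PLUG_SERVICES="!net.eth0"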

Building the kernel
There are two ways to build the kernel. You can do it manually, or you can have genkernel do it for you. Genkernel will also build an initrd for you, which is where you activate LVM, EVMS and DMRAID volumes.

Install the Xen kernel sources with:
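# emerge -av xen-sources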

In /usr/src you will now find the sources required to build the kernel for a Xen domain.

It is recommended to build two separate kernels, one for domain 0 and one for domain U. You can use modules, but all drivers required to boot must be built in.

Manually building the kernel
The Xen kernel can be difficult to configure - there are many options, some of which will cause your dom0 or domU kernels to fail on booting (e.g. with errors opening the root device).

Separating dom0 and domU
The xen-sources ebuilds only install one copy of the kernel sources, but you have two separate configurations to maintain: one for dom0 and one for domU. You will therefore want two different ".config" files and two different trees of compiled binaries.

This can be achieved with aliases like the following, defined in your shell configuration (e.g. ~/.bashrc):
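alias make0='make O=_dom0'
alias makeU='make O=_domU'
(The source tree itself must stay pristine, i.e. freshly mrproper'ed, for O= builds to work.)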

From now on, you use "make0" or "makeU" instead of "make". For example, "makeU menuconfig" will create the directory _domU and store the ".config" file in that subdirectory. "makeU all" will compile your domU kernel and store all binaries in that subdirectory. The same applies to make0 and the directory _dom0.

That way, you can manage both configurations with only one copy of the sources.

Now, for easy upgrading from one kernel version to another, we create a small helper script with the following content:
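A minimal sketch of such a script (the name copy-xen-config and its details are illustrative, assuming the _dom0/_domU layout above):

#!/bin/bash
# copy-xen-config <old-tree> <new-tree>
# Carry the _dom0 and _domU .config files over from the old
# kernel source tree under /usr/src to the new one.
OLD=/usr/src/$1
NEW=/usr/src/$2
for d in _dom0 _domU; do
    mkdir -p "$NEW/$d"
    cp "$OLD/$d/.config" "$NEW/$d/"
done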

With that script, we can easily upgrade from let's say xen-sources-2.6.20-xen-r2 to xen-sources-2.6.20-xen-r3 with the following steps:
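Roughly, assuming the helper script above:
# emerge -av xen-sources
# copy-xen-config linux-2.6.20-xen-r2 linux-2.6.20-xen-r3
# cd /usr/src/linux-2.6.20-xen-r3
# make0 oldconfig && make0 all && make0 modules_install
# makeU oldconfig && makeU all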

Domain 0 Kernel Configuration
The domain 0 kernel should contain drivers for Xen backend devices, and all of your usual hardware. That is, the dom0 configuration should enable all the options for backend drivers and disable all options for building in the frontend drivers. The frontend driver configuration options will be used when building the domU kernel so take note of them for later. In effect the backend driver allows the dom0 to talk directly to the hardware. Conversely, the frontend driver is a stub driver allowing the domU to efficiently call through to the dom0 to ask its backend driver to do the actual work.

Ethernet bridging support is required in order to bridge domain U kernels to a domain 0 device, as well as the network-device loopback driver. This is the default network setup performed by xend when a domain is created.

An alternative is to use IP routing in domain 0 if you want to keep domain U isolated from the external ethernet.

To even get at the Xen configuration options, you must make an appropriate selection under Processor type and features:
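With xen-sources of this era the selection looked roughly like this (exact menu wording varies between kernel versions):
Processor type and features --->
    Subarchitecture Type --->
        (X) Xen-compatible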

The configuration dialogue for the Xen kernel options has changed quite a bit since this tutorial was written. I'm posting the new stuff with a sample of a (not thoroughly tested) configuration based on this section and the notes added to it.

Other options are the same as they were the day this tutorial was written.

Now compile and install the Domain 0 kernel:
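Using the make0 alias from above (the installed file name is illustrative; the quick-build section below copies vmlinux the same way):
# make0 && make0 modules_install
# cp _dom0/vmlinux /boot/vmlinux-2.6-xen0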

Domain U Kernel Configuration
The domain U kernel should contain only Xen frontend drivers since it has no real hardware. It is recommended that only the Xen specific items are different between the Dom0 and DomU kernel configuration files.

You can also use gentoo-sources, hardened-sources or another kernel >= 2.6.23; these include Xen guest support. After enabling Processor type and features ---> Paravirtualized guest support ---> Xen guest support, you will find the Xen drivers at Device Drivers ---> Block Devices ---> Xen virtual block device support and Device Drivers ---> Network device support ---> Xen network device frontend driver.

Now compile and install the Domain U kernel:
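Again with the makeU alias (file names are illustrative; the image goes wherever you keep domU kernels, see below):
# makeU && makeU modules_install
# cp _domU/vmlinux /boot/vmlinux-2.6-xenU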

At the moment Xen can't boot from kernel images stored inside virtual machines, so you need to store them inside the domain 0 filesystem. In this example they are stored in /boot, but since they aren't necessary to boot domain 0 you can put them anywhere in the Dom0 filesystem.

Only one domain 0 kernel
Most of this is taken from the Xen Wiki, except the part about depmod.

Many users will be better off using the "-xen" kernel instead of the "-xen0" and "-xenU" kernels. The -xen0/U kernels are used to achieve faster compile times in the dev process. Each kernel can be compiled independently and, since only a small subset of kernel components are compiled, the overall process can save a great deal of developer time. The -xen kernel is more like the kernels that come with many distributions (Redhat/Fedora, SuSE, Debian Etch). These kernels ship with a large number of components, like drivers for devices and file systems, compiled as modules. This allows these kernels to run on more hardware than the type of stripped-down custom kernel you would find on an appliance. The -xen kernel will take longer to compile and will require an initrd, but once built it will work on more hardware and "play well" with more distributions. Many of the recent problems reported on the user list would have been avoided by using the -xen kernel.

To build the -xen kernel, edit the top-level Makefile so that this line:

KERNELS ?= linux-2.6-xen0 linux-2.6-xenU

looks like this:

KERNELS ?= linux-2.6-xen

Then build with:
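From the top of the unpacked Xen source tree (the kernels target rebuilt the trees listed in KERNELS in the Xen build system of that era; verify against your Makefile):
# make kernels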

You will get a single kernel and modules which can be used for both Domain 0 and all Domain Us. Copy the modules directory /lib/modules/2.6.x-xen to the /lib/modules directory of your VM and make an initrd with mkinitrd, but first you have to create the dependencies for the modules:
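(Substitute your actual -xen kernel version for 2.6.x-xen; use depmod -b <path> if the VM's root is mounted elsewhere.)
# depmod -a 2.6.x-xen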

Next run mkinitrd as explained (mkinitrd is currently masked ~ on amd64; I merged it anyway and it "seems" to work):
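(Image name as used in the GRUB and VM config examples below; the version string is illustrative.)
# mkinitrd /boot/initrd-xen-3.0.img 2.6.x-xen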

You will need to add the initrd to your grub config under the -xen kernel line. It looks something like this (more on grub.conf below. Please also see the note on gunzipping the generated image below):
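For example (kernel file name and root device are illustrative):

title Xen 3.0 / Linux 2.6-xen
root (hd0,0)
kernel /boot/xen.gz dom0_mem=98M
module /boot/vmlinuz-2.6.x-xen root=/dev/hda3 ro
module /boot/initrd-xen-3.0.img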

The same initrd can be used for the VM by adding the following to its config file.

ramdisk = "/boot/initrd-xen-3.0.img"

Using genkernel
There are a few kinks to work out if you wish to use genkernel to generate your kernel and initrd images.

When building >=xen-sources-2.6.16 the current version of genkernel (3.4.0) fails due to a change in Xen that genkernel hasn't been updated to deal with. Fortunately, there's an easy fix from bug #120236:

First try to set: KERNEL_MAKE_DIRECTIVE="vmlinux" and KERNEL_BINARY="vmlinux" in /usr/share/genkernel/arch/x86_64/config.sh

If this doesn't work, try the following hack:

The patch is not perfect. If you have trouble while booting your kernel and get a message saying that the switch_root applet could not be found just execute

echo "CONFIG_SWITCH_ROOT=y" >> /usr/share/genkernel/x86-xen0/busy-config

to enable this applet. After that you have to rebuild your initial ramdisk.

You might need to adjust your /usr/src/linux symlink in order for genkernel to choose the right kernel.

Set the following options in genkernel.conf:

Now run genkernel all to build and install your kernel and initrd. You might need extra arguments to genkernel if you're using EVMS, LVM, DMRAID or similar - refer to man genkernel.
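For example:
# genkernel --menuconfig all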

When the menu configuration pops up, you'll want to:


 * Choose the proper Processor family (under Processor type and features).
 * Enable support for your particular hardware.
 * Disable support for hardware you don't own.
 * Choose to build backend (xen0) or frontend (xenU) drivers or both (multi-dom kernel).
 * Enable 802.1d Ethernet Bridging in your xen0 kernel if you wish to bridge the virtual interfaces from your domU kernel to your external network interface (this is the default).
 * Otherwise, you probably want to make sure that IP routing is enabled.
 * Xen-sources 2.6.16.49 or greater may fail to compile when SCTP is enabled; disable it if you're not going to use it.

If it still won't compile (happened for me on AMD64 with xen-sources-2.6.16.49), read this bug report: http://bugs.gentoo.org/show_bug.cgi?id=177142

You need to choose a good place to put the domU vmlinuz images. At the moment Xen can't boot from kernel images stored inside virtual machines, so you need to store them in dom0. I just put them in /boot/, but since they aren't necessary to boot dom0 you can put them anywhere.

GRUB
The hypervisor is installed into /boot/xen.gz. It is booted in the same way as a kernel bzImage. Edit your GRUB config (you can just modify your old entry, replace kernel with module and add a kernel line pointing to xen.gz):
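For example (kernel file name and root device are illustrative):

title Xen 3.0 / Gentoo Linux 2.6-xen0
root (hd0,0)
kernel /boot/xen.gz dom0_mem=98M
module /boot/vmlinuz-2.6-xen0 root=/dev/hda3 ro console=tty0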

The dom0_mem hypervisor option sets the amount of memory to be allocated to domain 0 (in this case 98MB). In Xen 3.x the parameter may be specified with a B, K, M or G suffix, representing bytes, kilobytes, megabytes and gigabytes respectively; if no suffix is specified, the parameter defaults to kilobytes. Note: 98M led to insufficient memory errors with 2.6.16.18.

The module line is used to select the domain 0 kernel image you want the hypervisor to run, and to pass in options to the kernel command line.

If your domain 0 uses an initrd, you can load it by adding another module line. (Xen won't work with genkernel initrd images as-is: you literally need to gunzip and then gzip the initrd file again to get it to boot, because the default image has a few bytes of garbage beyond the end of the file.) For example, to boot a non-enforcing SELinux system with EVMS on the root disk, try:
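Something like (device and file names are illustrative; enforcing=0 boots SELinux in permissive mode):

title Xen / Gentoo Linux 2.6-xen0 (SELinux permissive, EVMS root)
root (hd0,0)
kernel /boot/xen.gz dom0_mem=98M
module /boot/vmlinuz-2.6-xen0 root=/dev/evms/root enforcing=0
module /boot/initrd-2.6-xen0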

Alternative: LILO
For those who use LILO, which does not support the "module" directive of GRUB, there is still a way of achieving the desired functionality. A utility called mbootpack has to be used in order to glue together the Xen hypervisor, the dom0 kernel and the initrd image.

Initially, the xen hypervisor and dom0 kernel images have to be decompressed:
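For example (file names are illustrative; both installed images are gzip-compressed):

# cd /boot
# gunzip -c xen.gz > xen
# gunzip -c vmlinuz-2.6-xen0 > vmlinux-2.6-xen0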

Afterwards, combine these two with the initrd image with the aid of mbootpack:
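For example (output name, kernel arguments and paths are illustrative; module arguments are embedded after the file name):

# mbootpack -o /boot/bzImage-xen -m '/boot/vmlinux-2.6-xen0 root=/dev/hda3 ro' -m /boot/initrd-2.6-xen0 /boot/xen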

You should now have a compatible bzImage containing the xen hypervisor, the xen dom0 kernel and the initrd. Lastly, update your lilo configuration by adding the appropriate entry:
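For example (label and paths are illustrative; that the append string reaches the hypervisor is an assumption, so check the mbootpack documentation):

image=/boot/bzImage-xen
    label=xen
    read-only
    append="dom0_mem=98M"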

And lastly, don't forget to apply the changes:
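# /sbin/lilo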

Alternative: PXELinux
Network booting (possibly with nfsroot file system) can ease setup and maintenance in some environments, such as a homogeneous cluster.

The Xen hypervisor and a domain 0 kernel can be booted using PXE. The sys-boot/syslinux package contains the PXELinux boot program; support for booting Xen has been present since syslinux-3.08.

Follow the instructions in HOWTO Gentoo Diskless Install and Diskless Nodes with Gentoo to set up a boot server running dhcp and tftp.

You need to serve the following via tftp:


 * The PXELinux binary pxelinux.0 and the mboot.c32 multiboot module (installed to /usr/lib/syslinux/ if you emerge syslinux).
 * Xen hypervisor (xen.gz)
 * Your Xen domain 0 vmlinuz
 * initrd if you need to load modules

In your pxelinux config file add a single line like:
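For example (server address, paths and file names are illustrative; mboot.c32 does the multiboot loading):

default mboot.c32 xen.gz dom0_mem=98M --- vmlinuz-2.6-xen0 root=/dev/nfs nfsroot=192.168.0.1:/export/xenroot ip=dhcp --- initrd-2.6-xen0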

The three dashes --- are important and are used to separate the different modules.

You can omit the --- initrd- bit if you aren't using a ram disk for modules. Also you can use a hard disk rather than nfsroot by changing the root= to point to a block device (e.g. root=/dev/hda, or root=/dev/md2 for RAID).

Configure the BIOS of your Xen host to boot from the network via PXE (this can be well hidden - on a Dell PowerEdge server I had to enable Onboard Devices -> NIC w/PXE and reboot before Network Controller appeared in the Boot Sequence menu).

On booting you should see the BIOS screen, followed by the PXE loader doing DHCP and fetching PXELinux, then PXELinux booting and fetching the hypervisor and kernel, then the hypervisor booting, and finally the kernel booting and mounting the nfsroot fs from the server. Phew!

Quick Build method
Based on:
 * xen-sources: linux-2.6.34-xen


 * 1) emerge -v gentoolkit
 * 2) * Getting the revdep-rebuild tool
 * 3) emerge -v xen
 * 4) * This should get you xen, xen-sources and xen-tools
 * 5) nano /etc/make.conf
 * 6) * add '-mno-tls-direct-seg-refs' to CFLAGS
 * 7) * According to a note higher up, this should only be needed on a 32-bit CPU.
 * 8) emerge -ve world (rebuild everything against the new CFLAGS)
 * 9) revdep-rebuild
 * 10) grub-install --no-floppy /dev/sda
 * 11) eselect kernel list
 * 12) eselect kernel set <n> (pick the xen-sources entry)
 * 13) ls -l /usr/src
 * 14) * linux should point to the xen source
 * 15) cd /usr/src/linux
 * 16) make mrproper
 * 17) mkdir -p ~/build/dom0 ~/build/domU
 * 18) make O=~/build/dom0 mrproper
 * 19) make O=~/build/dom0 menuconfig
 * 20) * Configure the dom0 kernel
 * 21) * See the Domain 0 Kernel Configuration section above.
 * 22) make O=~/build/dom0 && make O=~/build/dom0 modules_install
 * 23) cp ~/build/dom0/vmlinux /boot
 * 24) Update /boot/grub/menu.lst
 * 25) * See below.
 * 26) make O=~/build/domU mrproper
 * 27) make O=~/build/domU menuconfig
 * 28) make O=~/build/domU
 * 29) mkdir /var/xen/kernel
 * 30) cp ~/build/domU/vmlinux /var/xen/kernel/gentoo
 * 31) mkdir /var/xen/data
 * 32) edit /var/xen/data/gentoo
 * 33) * See below.
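
For reference, a minimal sketch of what the /var/xen/data/gentoo configuration might contain (all names, sizes and paths are illustrative; the real file is explained below):

kernel = "/var/xen/kernel/gentoo"
memory = 256
name = "gentoo"
vif = [ '' ]
disk = [ 'file:/var/xen/data/gentoo.img,xvda1,w' ]
root = "/dev/xvda1 ro"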

zconf.tab.c: No such file or directory
Fix: run make O=~/build/dom0 mrproper first.

make O=~/build/dom0 menuconfig
  GEN     /root/build/dom0/Makefile
  HOSTCC  scripts/kconfig/zconf.tab.o
gcc: /usr/src/linux-2.6.18-xen-r12/scripts/kconfig/zconf.tab.c: No such file or directory
gcc: no input files
make[2]: *** [scripts/kconfig/zconf.tab.o] Error 1
make[1]: *** [menuconfig] Error 2
make: *** [menuconfig] Error 2

root-nfs: no nfs server available giving up
Solution: go to a newer version of xen-sources.

linux-2.6.18-xen-r12 seems to not have support for SATA.

Or disable locking via the nolock NFS option.
