The Book of Xen

Chris Takemura - Luke S. Crawford


Administering VMs with the XenCenter

Having successfully installed both the Citrix server and the XenCenter client, we started our nice, shiny GUI frontend, told it about our XenServer, and logged in. The login process is straightforward and immediately drops you into a comprehensive two-panel interface, as shown in Figure 11-1.

The left panel displays hosts, both physical and virtual, in a tree view. On the right side, we can interact with the selected VM or change its parameters via the tabs at the top. Most tasks are broken out into a wizard-based interface.

These tasks are based on a concept of lifecycle management; you can create VMs, edit them, and destroy them, all by clicking through a series of dialog boxes. The user interface aims to make these steps, especially creation, as easy as possible; after all, part of the attraction of virtual computing appliances is that it's easy to add more machines or scale back as necessary.

Figure 11-1. XenCenter console

Installing DomU Images

XenServer offers several install methods: First, you can install from the included Debian Etch template. Second, you can install a supported distro using a template and distro-specific installer. Third, there is the HVM install using emulated devices. Finally, we have physical-to-virtual conversion using the P2V tool.

The Debian install is the fastest and easiest, but it's the least flexible. Templated installation is a good option, allowing PV installs of a good variety of Linux distros, though not all of them. The HVM install works well for everything but requires a machine that supports HVM and results in a domain running in HVM mode (which may be suboptimal, depending on your application). P2V allows you to clone an existing hardware-based Linux installation and to create templates based on that existing system, but it is inconvenient and works with only a few older distros.

Installing from the Debian Templates

The easiest way to install a VM is to use the prepopulated Debian Etch template. This template is a preconfigured basic install, designed to make starting a Xen instance almost a one-click affair. (There is also a Debian Lenny template, but it is an installer, not a fully populated instance.) To install from a template, log in to the Xen host using the graphical interface, select the XenServer from the list in the left panel of the screen (it should be the first entry), right-click it, and select New VM. It will pop up a wizard interface that allows you to configure the machine. Select the Debian Etch template. Answer the questions about RAM and disk space (the defaults should be fine), click Finish, and it'll create a guest.
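If you prefer to script this step, the xe command-line tool (covered later in this chapter) can perform the same install. The following is only a sketch: the template name and the guest name etch-guest are placeholders and may differ on your installation.

# xe template-list
# xe vm-install template="Debian Etch 4.0" new-name-label=etch-guest
# xe vm-start vm=etch-guest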

After the guest boots, it performs some first-boot configuration, then starts as a normal paravirtualized Xen instance, with the text console and graphical console already set up and ready to work with, as shown in Figure 11-2.

Figure 11-2. The graphical console of a freshly installed domU

Templated Linux VM

To ensure that the VM kernels are compatible with both Xen and the domU operating environment, the XenServer product supports installation of only a few RPM-based distros and, of course, the already-mentioned Debian VM templates. This support is implemented through templates, which are essentially canned VM configurations.

Installing a supported distro is almost as easy as installing the Debian templates. Go to the Install dialog, select your distro and an install source (physical media, ISO, or network), enter a name, tweak the parameters if desired, and click Install.

It's a bit harder to install an unsupported distro. However, the hardware emulation mode allows you to install any Linux distro by selecting the Other Install Media template and booting from the OS CD. From that point, proceed as with a normal install on hardware.

When you have the domain installed, you can configure it for paravirtualization and then convert it to a template.

Windows Install

Install Windows by selecting the XenServer server, selecting Install VM from the context menu, and then filling in the resulting dialog. Select the template that corresponds to the version of Windows that you want to install, and change the CD-ROM/DVD setting so that it points to the install media.

Click Install. Xen will create a new VM. When the machine comes up, it'll be in HVM mode, with the HVM BIOS configured to boot from the emulated CD-ROM. From that point, you can install Windows in the ordinary way. It's really quite a turnkey process; Citrix has put a lot of work into making Windows installation easy.

Creating DomU Images with P2V

The final way of installing is with the P2V install tool, short for physical to virtual. The tool creates domU images from physical Linux boxes, allowing you to install domUs on hardware that doesn't support HVM. Unfortunately, the P2V tool only supports a small number of Red Hat-like systems. Any other systems will cause it to quit with an error.

The tool comes as part of the XenServer install CD. To use it, boot the source machine from the XenServer CD. Interrupt the boot and enter p2v-legacy at the prompt if you're virtualizing a 32-bit system, or boot to the installer and select the P2V option if you're virtualizing a 64-bit system. A series of prompts will guide you through network setup and selecting a destination.

Existing filesystems on the machine will be copied and sent to the remote Citrix Xen server, which automatically creates configuration files and uses an appropriate kernel. Note that the P2V tool will conflate partitions in the copy process. In this way it's similar to the tar(1) process that we describe in Chapter 3, with added autoconfiguration magic.

Converting Pre-existing Virtual or Physical Machines with XenConvert

If you have virtual machines in the VMware VMDK, Microsoft VHD, or cross-platform OVF formats, you can convert them to Xen virtual machines using Citrix's Windows-based XenConvert utility. XenConvert will also work on physical Windows machines, similarly to the P2V utility.

XenConvert is quite simple to use. First, download the Windows installer from Citrix's site, at http://www.citrix.com/xenserver_xenconvert_free. Install the package, run the program, and follow the prompts.

XenServer Tools in the DomU

When you have a domain installed, you will almost certainly want to install the XenServer tools, which improve integration between the domain and the management interface. In particular, the tools allow the XenCenter to collect performance data from the domU. Under Windows, the Citrix tools also include paravirtualized drivers, which bypass the (slow) emulated drivers in favor of Xen-style ring buffers and so forth.

To install the tools, select the virtual machine in XenCenter, right-click it, and choose the Install XenServer Tools option from the context menu to switch the emulated CD. An alert box will pop up, advising you of the proper steps to perform.

Under Windows, the installer will autorun. Answer the prompts and let it go ahead with the install.[69] Reboot, selecting the PV option at the bootloader. Windows will detect and configure your new PV devices.

With Linux VMs, as the prompt says, perform these commands:

# mount /dev/xvdd /mnt
# /mnt/Linux/install.sh

The install.sh script will select an appropriate package and install it. Reboot and enjoy the utilization graphs.

xe: Citrix XenServer's Command-Line Tool

Citrix also ships a command-line interface alongside the graphical console. This command-line tool is called xe, and it's recommended for backups and similar automated tasks. In our opinion, it's definitely less pleasant than the XenCenter for everyday use. It's probably just our bias, but it also seems more cumbersome than the open source equivalent, xm.

You can use xe either from a separate management host (which can run either Windows or Linux) or in local mode directly on the XenServer host.[70]

Citrix includes xe as an RPM on the Linux supplement CD in the client_install directory. Make sure you have the required stunnel package. In our case, to install it on Slackware, we did:

# cd /media/XenServer-5.0.0LinuxPack/client_install
# rpm -ivh --nodeps xe-cli-5.0.0-13192p.i386.rpm

When the client is installed on a remote machine, you can run it. Make sure to specify -s; otherwise it'll assume that you want to connect to the local host and fail.

# xe help -s corioles.prgmr.com

Whether you're using xe locally or remotely, the commands and parameters are the same. xe is actually a very thin wrapper around the Xen API. It exposes almost all the functionality offered by the API, with a corresponding difficulty of use. If you run it with the help --all command, it outputs a daunting usage message, detailing a huge variety of possible actions.
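When running other commands against a remote server, you'll generally need to supply credentials as well. The line below is a sketch based on xe's standard -s, -u, and -pw switches; the password shown is, of course, a placeholder.

# xe vm-list -s corioles.prgmr.com -u root -pw password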

Fortunately, we can break these commands into groups. In general, there are commands to interact with the host and with virtual machines. There are commands to get logging information. There are pool commands. We have commands to administer virtual devices such as vifs and vbds.
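As a rough sketch of what those groups look like in practice, here are a few everyday host and VM lifecycle commands (aufidius is the example domain used below; substitute your own VM name):

# xe host-list
# xe vm-list
# xe vm-start vm=aufidius
# xe vm-shutdown vm=aufidius
# xe vm-reboot vm=aufidius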

Although some of the xe commands are similar to xm commands, the xe syntax is a bit more elaborate. The first argument must be a command name, followed by any switches, followed by any command parameters, in name=value syntax. It looks cumbersome, but Citrix has shipped a very nice bash completion setup to make autocomplete work well for the xe-specific parameters. It even fills in UUIDs. Thus:

# xm network-list 1
Idx BE MAC Addr.          handle state evt-ch tx-/rx-ring-ref BE-path
0   0  00:16:3E:B9:B0:53  0      4     8      522/523         /local/domain/0/backend/vif/1/0

becomes, with xe:

# xe vm-vif-list vm-name=aufidius

name: eth0
mac: 00:16:3E:B9:B0:53
ip: 192.168.1.64
vbridge: xenbr0
rate: 0

The documentation and the various recipes that are offered on Citrix's website have more advice on using xe.

XenServer's Disk Management

The XenServer software reserves a pair of 4GB partitions for itself, leaving the rest of the disk available for domUs. The first partition has the active XenServer install. The second partition is ordinarily left blank; however, if the server is upgraded, that partition is formatted and used as a complete backup of the previous install.

Warning: Note that this backup only applies to the dom0 data; the installer will wipe domU storage repositories on the disk. The moral? Back up domUs manually before upgrading XenSource.

The rest of the space is put into a volume group, or, as Citrix calls it, a storage repository. As domUs are created, the server divides the space using LVM. The storage setup, for a single disk, can be seen in Figure 11-3. Each additional disk becomes a single PV, which is added to the storage pool.

Figure 11-3. XenSource disk layout

Each LV gets a very long name that uses a UUID (universally unique identifier) to associate it with a VM.

Xen Storage Repositories

If you log in to the XenServer on the console or via SSH using the root password that you entered during the install, you can use standard Linux commands to examine the installed environment. To continue our example, you can use the LVM tools:

# vgs
  VG                                                 #PV #LV #SN Attr   VSize   VFree
  VG_XenStorage-03461f18-1189-e775-16f9-88d5b0db543f   1   0   0 wz--n- 458.10G 458.10G

However, you'll usually want to use the Citrix-provided higher-level commands because those also update the storage metadata. Equivalently, to list storage repositories using xe:

# xe sr-list
uuid (RO): 03461f18-1189-e775-16f9-88d5b0db543f
name-label (RW): Local storage
name-description (RW):
host (RO): localhost.localdomain
type (RO): lvm
content-type (RO): user

Note that the SR UUID matches the name of the volume group.

A complete description of xe's capabilities with regard to storage is best left to Citrix's documentation. However, we'll describe a brief session to illustrate the relationship between LVM, Xen's storage pools, and the hypervisor.

Let's say that you've added a new SATA disk to your XenServer, /dev/sdb. To extend the default XenServer storage pool to the new disk, you can treat the storage pool as a normal LVM volume group:

# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
# vgextend VG_XenStorage-9c186713-1457-6edb-a6aa-cbabb48c1e88 /dev/sdb1
  Volume group "VG_XenStorage-9c186713-1457-6edb-a6aa-cbabb48c1e88" successfully extended
# vgs
  VG                                                 #PV #LV #SN Attr   VSize   VFree
  VG_XenStorage-9c186713-1457-6edb-a6aa-cbabb48c1e88   2   2   0 wz--n- 923.86G 919.36G
# service xapi restart

The only unusual thing that we've done here is to restart the xapi service so that the various administration tools can use the new storage.

However, Citrix recommends that you perform these operations through their management stack. If you want to do anything more complex, like create a new storage repository, it's better to use the appropriate xe commands rather than work with LVM directly. Here's an example of the same operation, using xe:

# xe sr-create name-label="Supplementary Xen Storage" type=lvm device-config-device=/dev/sdb1
a154498a-897c-3f85-a82f-325e612d551d

That's all there is to it. Now the GUI should immediately show a new storage repository under the XenServer machine. We can confirm its status using xe sr-list:

# xe sr-list
uuid (RO): 9c186713-1457-6edb-a6aa-cbabb48c1e88
name-label (RW): Local storage on corioles
name-description (RW):
type (RO): lvm
content-type (RO): user

uuid (RO): a154498a-897c-3f85-a82f-325e612d551d
name-label (RW): Supplementary Xen Storage
name-description (RW):
type (RO): lvm
content-type (RO): disk

Citrix's website has more information on adding storage with xe, including the options of using file-backed storage, iSCSI, or NFS. They also cover such topics as removing storage repositories and setting QoS controls on VM storage. We defer to them for further details.

Emulated CD-ROM Access

One of the slickest things about Citrix's product is their CD-ROM emulation.[71] In addition to giving VMs the option of mounting the physical drives attached to the machine, it presents ISO images as possible CDs. When you change the CD, the domU immediately registers that a new disc has been inserted.

XenServer looks for local ISO images in /opt/xensource/packages/iso, and it looks for shared ISO images in /var/opt/xen/iso_import. Both of these paths are on the server, not the admin host. Note that the XenServer host has a very limited root filesystem and devotes most of its disk space to virtual machines; thus, we recommend using shared NFS or CIFS storage for ISOs. However, local ISO storage is still possible. For example, to make a Windows 2003 ISO conveniently accessible to the XenServer VM installer, we can:

# dd if=/dev/cdrom of=/opt/xensource/packages/iso/win2003.iso

Then restart the xapi service as before, and select the new ISO from the drop-down menu in XenCenter's graphical console tab. You can also use this ISO as an install source when creating virtual machines.
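For the shared-storage approach we recommend, one option is simply to mount an NFS export of your ISO library at the shared ISO path and restart xapi so the images show up. This is a sketch only; the server name and export path below are placeholders.

# mkdir -p /var/opt/xen/iso_import
# mount -t nfs nfs-server:/export/isos /var/opt/xen/iso_import
# service xapi restart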

XenServer VM Templates

The templates are one of the nicest features of XenCenter. They allow you to create a virtual machine with predefined specifications with a couple of clicks. Although Citrix includes some templates, you'll probably want to add your own.

The easiest way to create VM templates is to create a VM with the desired setup and then convert it to a template using the XenSource management software. Right-click the machine in the GUI and select Convert to Template. Conceptually, this is like the golden client concept used by, say, SystemImager; you first tailor a client to meet your needs and then export it as the model for future installs.

You can accomplish the same conversion from the command line by setting the VM's is-a-template parameter:

# xe vm-param-set uuid=<vm UUID> is-a-template=true

Another option is to use the P2V tool. To create a template from a physical machine, boot the machine from the XenServer CD as you would to create a VM, but direct the output of the P2V tool at an NFS share rather than a XenServer host. The template will show up in the XenCenter client's list of available templates.
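Once a template exists, whether converted through the GUI or flagged with is-a-template as above, you can stamp out new guests from it. A minimal sketch, with the template and VM names as placeholders:

# xe vm-install template="Etch golden client" new-name-label=etch-clone-01
# xe vm-start vm=etch-clone-01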


[69] By the way, Citrix is, in fact, serious about the drivers not supporting pre-SP2 Windows XP. We tried.

Xen also has to handle memory a bit differently to accommodate unmodified guests. Because these unmodified guests aren't aware of Xen's memory structure, the hypervisor needs to use shadow page tables that present the illusion of contiguous physical memory starting at address 0, rather than the discontiguous physical page tables supported by Xen-aware operating systems. These shadows are in-memory copies of the page tables used by the hardware, as shown in Figure 12-1. Attempts to read and write to the page tables are intercepted and redirected to the shadow. While the guest runs, it reads its shadow page tables directly, while the hardware uses the pretranslated version supplied to it by the hypervisor.

Figure 12-1. All guest page table writes are intercepted by the hypervisor and go to the shadow page tables.

When the execution context switches to the guest, the hypervisor translates pseudophysical addresses found in the shadow page tables to machine physical addresses and updates the hardware to use the translated page tables, which the guest then accesses directly.

Device Access with HVM

Of course, if you've been paying attention thus far, you're probably asking how the HVM domain can access devices if it hasn't been modified to use the Xen virtual block and network devices. Excellent question!

The answer is twofold: First, during boot, Xen uses an emulated BIOS to provide simulations of standard PC devices, including disk, network, and framebuffer. This BIOS comes from the open source Bochs emulator at http://bochs.sourceforge.net/. Second, after the system has booted, when the domU expects to access SCSI, IDE, or Ethernet devices using native drivers, those devices are emulated using code originally found in the QEMU emulator. A userspace program, qemu-dm, handles translations between the native and emulated models of device access.
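On an open source Xen install, these pieces show up directly in the HVM domain's config file. The fragment below is a sketch: the disk volume and bridge name are placeholders, and the hvmloader and qemu-dm paths are the usual defaults for Xen 3.x but may differ on your distro.

kernel = "/usr/lib/xen/boot/hvmloader"
builder = "hvm"
device_model = "/usr/lib/xen/bin/qemu-dm"
memory = 512
disk = ['phy:/dev/vg/hvm-disk,hda,w']
vif = ['type=ioemu, bridge=xenbr0']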

HVM Device Performance

This sort of translation, where we have to mediate hardware access by breaking out of virtualized mode using a software device emulation and then reentering the virtualized OS, is one of the trade-offs involved in running unmodified operating systems.[76] Rather than simply querying the host machine for information using a lightweight page-flipping system, HVM domains access devices precisely as if they were physical hardware. This is quite slow.

Both AMD and Intel have done work aimed at letting guests use hardware directly, using an IOMMU (I/O Memory Management Unit) to translate domain-virtual addresses into the real PCI address space, just as the processor's MMU handles the translations for virtual memory.[77] However, this isn't likely to replace the emulated devices any time soon.

HVM and SMP

SMP (symmetric multiprocessing) works with HVM just as with paravirtualized domains. Each virtual processor has its own control structure, which can in turn be serviced by any of the machine's physical processors. In this case, by physical processors we mean logical processors as seen by the machine, including the virtual processors presented by SMT (simultaneous multithreading, or hyperthreading).

To turn on SMP, include the following in the config file:

acpi=1
vcpus=n

(Where n is an integer greater than one. A single CPU does not imply SMP. Quite the opposite, in fact.)

Note: Although you can specify more CPUs than actually exist in the box, performance will... suffer. We strongly advise against it.

Just as in paravirtualized domains, SMP works by providing a VCPU abstraction for each virtual CPU in the domain, as shown in Figure 12-2. Each VCPU can run on any physical CPU in the machine. Xen's CPU-pinning mechanisms also work in the usual fashion.
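For example, with the standard xm tool you can inspect and pin VCPUs as usual; the domain name and CPU numbers below are placeholders.

# xm vcpu-list hvm-guest
# xm vcpu-pin hvm-guest 0 0
# xm vcpu-pin hvm-guest 1 1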

Unfortunately, SMP support isn't perfect. In particular, time is a difficult problem with HVM and SMP. Clock synchronization seems to be entirely unhandled, leading to constant complaints from the kernel with one of our test systems (CentOS 5, Xen version 3.0.3-rc5.el5, kernel 2.6.18-8.el5xen). Here's an example:

Timer ISR/0: Time went backwards: delta=-118088543 delta_cpu=25911457 shadow=157034917204
off=452853530 processed=157605639580 cpu_processed=157461639580

Figure 12-2. As each domain's time allocation comes up, its VCPU's processor state is loaded onto the PCPU for further execution. Privileged updates to the VCPU control structure are handled by the hypervisor.

One other symptom of the problem is in bogomips values reported by /proc/cpuinfo: on a 2.4GHz Core 2 Duo system, we saw values ranging from 13.44 to 73400.32. In the dom0, each core showed 5996.61, an expected value.

Don't worry; this might be unsettling, but it's also harmless.

HVM and Migration

HVM migration works as of Xen 3.1. The migration support in HVM domains is based on that for paravirtualized domains but is extended to account for the fact that it takes place without the connivance of the guest OS. Instead, Xen itself pauses the VCPUs, while xc_save handles memory and CPU context. qemu-dm also takes a more active role, saving the state of emulated devices.

The point of all this is that you can migrate HVM domains just like paravirtualized domains, using the same commands, with the same caveats. (In particular, remember that attempts to migrate an HVM domain to a physical machine that doesn't support HVM will fail ungracefully.)
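In other words, the same xm migration commands used for paravirtualized domains apply. A minimal sketch, with the domain name and destination host as placeholders (the destination must itself support HVM and be configured to accept migrations):

# xm migrate --live hvm-guest dest-host.example.com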

[73] Intel has a nice introduction to their virtualization extensions at http://www.intel.com/technology/itj/2006/v10i3/3-xen/1-abstract.htm and a promotional overview page at http://www.intel.com/technology/platform-technology/virtualization/index.htm. They're worth reading.

[74] Also, Gentle Reader, your humble authors lack a recent Itanium to play with. Please forward offers of hardware to

[75] AMD has a light introduction to their extensions at http://developer.amd.com/TechnicalArticles/Articles/Pages/630200615.aspx.

[76] As Intel points out, the actual implementation of HVM drivers is much better than this naive model. For example, device access is asynchronous, meaning that the VM can do other things while waiting for I/O to complete.

[77] There's an interesting paper on the topic at http://developer.amd.com/assets/IOMMU-ben-yehuda.pdf.

Xen HVM vs. KVM

Of course, if your machine supports virtualization in hardware, you might be inclined to wonder what the point of Xen is, rather than, say, KVM or lguest.
