The Book of Xen

Chris Takemura - Luke S. Crawford


[23] Except when they're QCOW images. Let's ignore that for now.

Mass Deployment Of course, all this is tied up in the broader question of provisioning infrastructure and higher-level tools like Kickstart, SystemImager, and so on. Xen amplifies the problem by increasing the number of servers you own exponentially and making it easy and quick to bring another server online. That means you now need the ability to automatically deploy lots of hosts.

Manual Deployment The most basic approach (analogous to tarring up a filesystem) is probably to build a single tarball using any of the methods we've discussed and then make a script that partitions, formats, and mounts each domU file and then extracts the tarball.

For example:

#!/bin/bash

LVNAME=$1

lvcreate -C y -L 1024 -n ${LVNAME} lvmdisk

parted /dev/lvmdisk/${LVNAME} mklabel msdos
parted /dev/lvmdisk/${LVNAME} mkpartfs primary ext2 0 1024

kpartx -p "" -av /dev/lvmdisk/${LVNAME}

tune2fs -j /dev/mapper/${LVNAME}1

mount /dev/mapper/${LVNAME}1 /mountpoint

tar -C /mountpoint -zxf /opt/xen/images/base.tar.gz

umount /mountpoint

kpartx -d /dev/lvmdisk/${LVNAME}

cat > /etc/xen/${LVNAME} <<EOF
name = "$LVNAME"
memory = 128
disk = ['phy:/dev/lvmdisk/${LVNAME},xvda,w']
vif = ['']
kernel = "/boot/vmlinuz-2.6-xenU"
EOF

exit 0

This script takes a domain name as an argument, provisions storage from a tarball at /opt/xen/images/base.tar.gz, and writes a config file for a basic domain, with a gigabyte of disk and 128MB of memory. Further extensions to this script are, as always, easy to imagine. We've put this script here mostly to show how simple it can be to create a large number of domU images quickly with Xen. Next, we'll move on to more elaborate provisioning systems.
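One obvious extension, for example, is a wrapper that stamps out config files for a whole batch of domains at once. Here's a minimal sketch of just the config-generation step; the directory and the domain names are our examples, not part of the book's script:

```shell
#!/bin/bash
# Sketch: write one domU config per name, mirroring the heredoc in the
# script above. CFGDIR and the domain names are illustrative.
CFGDIR="${TMPDIR:-/tmp}/xen-configs"
mkdir -p "$CFGDIR"
for name in lennox rosse angus; do
    cat > "$CFGDIR/$name" <<EOF
name = "$name"
memory = 128
disk = ['phy:/dev/lvmdisk/$name,xvda,w']
vif = ['']
kernel = "/boot/vmlinuz-2.6-xenU"
EOF
done
```

On a real dom0 you'd write to /etc/xen instead and move the storage-provisioning steps inside the loop.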

QEMU and Your Existing Infrastructure Another way to do mass provisioning is with QEMU, extending the QEMU installation we previously outlined. Because QEMU simulates a physical machine, you can use your existing provisioning tools with QEMU, in effect treating virtual machines exactly like physical machines. For example, we've done this using SystemImager to perform automatic installs on the emulated machines.

This approach is perhaps the most flexible (and most likely integrates best with your current provisioning system), but it's slow. Remember, KQEMU and Xen are not compatible, so you are running old-school, software-only QEMU. Slow! And needlessly slow because when a VM has been created, there's nothing to keep you from duplicating it rather than going through the entire process again. But it works, and it works the exact same way as your previous provisioning system.[24]

We'll describe a basic setup with SystemImager and QEMU, which should be easy enough to generalize to whichever other provisioning system you've got in place.

Setting Up SystemImager First, install SystemImager using your method of choice-yum, apt-get, download from http://wiki.systemimager.org/-whichever. We downloaded the RPMs using the sis-install script:

# wget http://download.systemimager.org/pub/sis-install/install
# sh install -v --download-only --tag=stable --directory . systemconfigurator systemimager-client systemimager-common systemimager-i386boot-standard systemimager-i386initrd_template systemimager-server

SystemImager works by taking a system image of a golden client, hosting that image on a server, and then automatically rolling the image out to targets. In the Xen case, these components-golden client, server, and targets-can all exist on the same machine. We'll assume that the server is dom0, the client is a domU that you've installed by some other method, and the targets are new domUs.

Begin by installing the dependency, systemconfigurator, on the server:

# rpm -ivh systemconfigurator-*

Then install the server packages:

# rpm -ivh systemimager-common-* systemimager-server-* systemimager-i386boot-standard-*

Boot the golden client using xm create and copy the SystemImager packages into it, for example with scp from the server's /path/to/systemimager/* directory. Then install the packages (note that we are performing these next steps within the domU rather than the dom0):

# rpm -ivh systemconfigurator-*

# rpm -ivh systemimager-common-* systemimager-client-* systemimager-i386boot-initrd_template-*

SystemImager's process for generating an image from the golden client is fairly automated. It uses rsync to copy files from the client to the image server. Make sure the two hosts can communicate over the network. When that's done, run on the client:

# si_prepareclient --server

Then run on the server:

# si_getimage --golden_client --image porter --exclude /mnt

The server will connect to the client and build the image, using the name porter.

Now you're ready to configure the server to actually serve out the image. Begin by running the si_mkbootserver script and answering its questions. It'll configure DHCP and TFTP for you.

# si_mkbootserver

Then answer some more questions about the clients:

# si_mkclients

Finally, use the provided script to enable netboot for the requisite clients:

# si_mkclientnetboot --netboot --clients lennox rosse angus

And you're ready to go. Boot the QEMU machine from the emulated network adapter (which we've left unspecified on the command line because it's active by default):

# qemu -hda /xen/lennox/root.img -boot n

Of course, after the clients install, you will need to create domU configurations. One way might be to use a simple script (in Perl this time, for variety):

#!/usr/bin/perl
$name = $ARGV[0];
open(XEN, '>', "/etc/xen/$name");
print XEN <<CONFIG;
memory = 128
name = "$name"
disk = ['tap:aio:/xen/$name/root.img,hda1,w']
vif = ['']
root = "/dev/hda1 ro"
CONFIG
close(XEN);

(Further refinements, such as generating an IP based on the name, are of course easy to imagine.) In any case, just run this script with the name as argument:

# makeconf.pl lennox

And then start your shiny new Xen machine:

# xm create -c /etc/xen/lennox

Installing pypxeboot Like PyGRUB, pypxeboot is a Python script that acts as a domU bootloader. Just as PyGRUB loads a kernel from the domain's virtual disk, pypxeboot loads a kernel from the network, after the fashion of PXEboot (for Preboot eXecution Environment) on standalone computers. It accomplishes this by calling udhcpc (the micro-DHCP client) to get a network configuration, and then TFTP to download a kernel, based on the MAC address specified in the domain config file.
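Once everything is in place, pointing a domU at pypxeboot is a matter of the config file. A sketch of what that might look like (the MAC address is purely illustrative, and the exact option names should be checked against your pypxeboot version):

```
bootloader = "/usr/bin/pypxeboot"
vif = ['mac=00:16:3e:00:00:01']
bootargs = vif[0]
```

Here bootargs hands the vif's MAC address to pypxeboot, which uses it for the DHCP request.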

pypxeboot isn't terribly hard to get started with. You'll need the pypxeboot package itself, udhcp, and tftp. Download the packages and extract them. You can get pypxeboot from http://book.xen.prgmr.com/mediawiki/index.php/pypxeboot and udhcp from http://book.xen.prgmr.com/mediawiki/index.php/udhcp. Your distro will most likely include the tftp client already.


The pypxeboot package includes a patch for udhcp that allows udhcp to take a MAC address from the command line. Apply it.

And Then...

In this chapter, we've gone through a bunch of install methods, ranging from the generic and brute force to the specialized and distro-specific. Although we haven't covered anything in exhaustive detail, we've done our best to outline the procedures to emphasize when you might want to, say, use yum, and when you might want to use QEMU. We've also gestured in the direction of possible pitfalls with each method.

Many of the higher-level domU management tools also include a quick-and-easy way to install a domU if none of these more generic methods strike your fancy. (See Chapter 6 for details.) For example, you're most likely to encounter virt-install in the context of Red Hat's virt-manager.

The important thing, though, is to tailor the install method to your needs. Consider how many systems you're going to install, how similar they are to each other, and the intended role of the domU, and then pick whatever makes the most sense.

Chapter 4. STORAGE WITH XEN

Throughout this book, so far, we've talked about Xen mostly as an integrated whole, a complete virtualization solution, to use marketing's word. The reality is a bit more complex than that. Xen itself is only one component of a platform that aims to free users from having to work with real hardware. The Xen hypervisor virtualizes a processor (along with several other basic components, as outlined in Chapter 2), but it relies on several underlying technologies to provide seamless abstractions of the resources a computer needs. This distinction is clearest in the realm of storage, where Xen has to work closely with a virtualized storage layer to provide the capabilities we expect of a virtual machine.

By that we mean that Xen, combined with appropriate storage mechanisms, provides near total hardware independence. The user can run the Xen machine anywhere, move the instance about almost at will, add storage freely, save the filesystem state cleanly, and remove it easily after it's done.

Sounds good? Let's get started.

Storage: The Basics The first thing to know about storage-before we dive into configuration on the dom0 side-is how to communicate its existence to the domain. DomUs find their storage by examining the domU config file for a disk= line. Usually it'll look something like this:

disk = [ 'phy:/dev/cleopatra/menas,sda,w',
         'phy:/dev/cleopatra/menas_swap,sdb,w' ]

This line defines two devices, which appear to the domU as sda and sdb. Both are physical,[26] as indicated by the phy: prefix-other storage backends have their own prefixes, such as file: and tap: for file-backed devices. You can mix and match backing device types as you like-we used to provide a pair of phy: volumes and a file-backed read-only "rescue" image.
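A mixed stanza along those lines might look like this (a sketch; the rescue image path is our invention, purely illustrative):

```
disk = [ 'phy:/dev/cleopatra/menas,sda,w',
         'phy:/dev/cleopatra/menas_swap,sdb,w',
         'file:/opt/xen/images/rescue.img,sdc,r' ]
```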

We call this a line, but it's really more of a stanza-you can put the strings on separate lines, indent them with tabs, and put spaces after the commas if you think that makes it more readable. In this case, we're using LVM, with a volume group named cleopatra and a pair of logical volumes called menas and menas_swap.

Note By convention, we'll tend to use the same name for a domain, its devices, and its config file. Thus, here, the logical volumes menas and menas_swap belong to the domain menas, which has the config file /etc/xen/menas and network interfaces with similar names. This helps to keep everything organized.

You can examine the storage attached to a domain by using the xm block-list command-for example:

# xm block-list menas
Vdev  BE  handle  state  evt-ch  ring-ref  BE-path
2049  0   0       4      6       8         /local/domain/0/backend/vbd/1/2049
2050  0   0       4      7       9         /local/domain/0/backend/vbd/1/2050

Now, armed with this knowledge, we can move on to creating backing storage in the dom0.

[26] As you may gather, a physical device is one that can be accessed via the block device semantics, rather than necessarily a discrete piece of hardware. The prefix instructs Xen to treat the device as a basic block device, rather than providing the extra translation required for a file-backed image.

Varying Types of Storage It should come as little surprise, this being the world of open source, that Xen supports many different storage options, each with its own strengths, weaknesses, and design philosophy. These options broadly fall into the categories of file based and device based.

Xen can use a file as a block device. This has the advantage of being simple, easy to move, mountable from the host OS with minimal effort, and easy to manage. It also used to be very slow, but this problem has mostly vanished with the advent of the blktap driver. The file-based block devices differ in the means by which Xen accesses them (basic loopback versus blktap) and the internal format (AIO, QCOW, etc.).
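Creating a file-backed device is about as simple as it sounds. A sketch (the paths are examples; on a real dom0 you'd also make a filesystem in the image and attach it with a tap: or file: disk line):

```shell
# Create a 1GB sparse file to back a domU disk. The path is an example.
IMG="${TMPDIR:-/tmp}/example-root.img"
dd if=/dev/zero of="$IMG" bs=1M seek=1024 count=0 2>/dev/null
# On a real dom0, you would then format and attach it:
#   mkfs.ext3 -F "$IMG"
#   disk = ['tap:aio:/path/to/root.img,xvda,w']
```

The seek trick allocates the space lazily, so the image occupies almost no disk until the domU writes to it.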

Xen can also perform I/O to a physical device. This has the obvious drawback of being difficult to scale beyond your ability to add physical devices to the machine. The physical device, however, can be anything the kernel has a driver for, including hardware RAID, fibre channel, MD, network block devices, or LVM. Because Xen accesses these devices via DMA (direct memory access) between the device driver and the Xen instance, mapping I/O directly into the guest OS's memory region, a domU can access physical devices at near-native speeds.
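With LVM, the commands match the cleopatra/menas example from earlier in the chapter. A sketch of the provisioning side, plus a tiny helper that emits the matching disk= entry (the helper function and the 4GB size are ours, for illustration):

```shell
# On a real dom0, carve out the logical volume first:
#   lvcreate -L 4096M -n menas cleopatra
# The domU then needs a matching phy: entry. A helper to emit it:
disk_line() {
    vg=$1; lv=$2; dev=$3
    printf "disk = ['phy:/dev/%s/%s,%s,w']\n" "$vg" "$lv" "$dev"
}
disk_line cleopatra menas sda
# -> disk = ['phy:/dev/cleopatra/menas,sda,w']
```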
