OpenSUSE 11.3 was quite an easy install. I haven't used SUSE since the early 'oughts, but first impressions were pretty good. OpenSUSE 11.3 is KDE-based, which is a change from the other distributions I've been using for the past few years -- Ubuntu on my server at home, Debian on my web and email server, and various Red Hat derivatives at work -- and seems to be pretty well put together. The latest incarnation of YAST makes it more easily manageable from the command line over a slow network connection than the latest Ubuntu or Red Hat, which rely on GUI tools at the desktop.

The biggest difference between Red Hat and SUSE is the package dependency manager: SUSE uses "zypper", which is roughly equivalent to Red Hat's "yum" and Debian's "apt-get" but has its own quirks. It appears to be slightly faster than "yum" and roughly the same speed as "apt-get". If you wonder why SUSE/Novell wrote "zypper": at the time they wrote it, "yum" was excruciatingly slow and utterly unusable unless you had the patience of Job. Red Hat has sped up "yum" significantly since that time, but SUSE has stuck with "zypper" nevertheless.

I also set up the bridging VLAN configuration that I mentioned in my previous post about how to do it on Fedora. Again, SUSE has slightly different syntax than Red Hat for how to do this in /etc/sysconfig/network/* (note *not* network-scripts), but it was fairly easy to figure out by reading the ifup / ifdown scripts and consulting SUSE's documentation.
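For reference, here's a minimal sketch of what the SUSE-style config might look like -- the device names (eth0, vlan3, br0) and the address are my own placeholders, so adjust to match your actual NIC and VLAN tag:

  # /etc/sysconfig/network/ifcfg-vlan3 -- VLAN 3 tagged on eth0, no IP of its own
  STARTMODE='auto'
  BOOTPROTO='none'
  ETHERDEVICE='eth0'

  # /etc/sysconfig/network/ifcfg-br0 -- bridge enclosing the VLAN interface
  STARTMODE='auto'
  BOOTPROTO='static'
  IPADDR='192.168.3.1/24'
  BRIDGE='yes'
  BRIDGE_PORTS='vlan3'
  BRIDGE_STP='off'

The VLAN ID comes from the interface name, and the bridge encloses the VLAN interface rather than the raw NIC -- same idea as the Fedora setup, just spelled differently.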
So anyhow, I installed the virtualization environment via "yast" and rebooted into the Xen kernel it downloaded. At that point I created a "win7" LVM volume in LVM volume group "virtgroup" on my 2TB RAID array, went into the Red Hat "virt-manager" tool, attached to my Xen domain, told it to use that LVM volume as the "C" drive, and installed Windows 7 on it. I'm using an LVM volume because at work with KVM, I find that this gives significantly better disk I/O performance in my virtual machines than pointing the virtual disk drive at a file on a filesystem. Since both Xen and KVM use QEMU to provide the virtual disk drive to the VM, I figured the same issue would apply to Xen, and adopted the same solution I adopted at work -- just point it at an LVM volume, already. (More on that later, maybe.)
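If you're following along, carving out the volume is a one-liner -- the 60GB size here is just my assumption, so size it for whatever your Windows install actually needs:

  # carve a raw volume for the VM's "C" drive out of the existing volume group
  lvcreate -L 60G -n win7 virtgroup

That gives you /dev/virtgroup/win7, a raw block device you can point virt-manager at directly, with no filesystem layer in the loop to slow things down.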
Okay, so now I have Windows 7 64-bit installed and running. I shut it down and went to attach PCI devices to it via virt-manager and... err. No. Virt-manager wouldn't do it. Red Hat strikes again: it claims that Xen can't do PCI passthrough! So I went back to the handy Xen Wiki and started figuring out, via trial and error, how to use the "xm" command line -- where the 'man' page for xm doesn't in any way reflect the actual behavior of the program that you see when you type 'xm help'. So here we go...
First, claw back the physical devices you're going to use by booting with them attached to pciback. So my module line for the xen.gz kernel in /boot/grub/menu.lst looks like...
module /vmlinuz-2.6.34.7-0.5-xen root=/dev/disk/by-id/ata-WDC_WD5000BEVT-22ZAT0_WD-WXN109SE2104-part2 resume=/dev/datagroup/swapvol splash=silent showopts pciback.hide=(02:00.0)(02:00.1)(00:1a.0)(00:1a.1)(00:1a.2)(00:1a.7)(00:1b.0)
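(If you're wondering where those bus:device.function numbers come from, lspci will show them. On my hardware, something along the lines of

  lspci | grep -i -e vga -e audio -e usb

lists the ATI card, its audio function, and the USB controllers along with their 02:00.0-style addresses. Your numbers will differ, obviously.)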
Note that while XenSource has renamed pciback to 'xen-pciback', OpenSUSE renames it back to 'pciback' for backward compatibility with older versions of Xen. So anyhow, on my system, this hides the ATI card and its sound card component, and the USB controller to which the mouse and keyboard are attached; I leave the other USB controller attached to Linux. I did not have any luck pushing individual USB devices directly to the VM -- I had to push the entire controller instead. Apparently the Xen version of QEMU shipped with OpenSUSE 11.3 doesn't implement USB passthrough (or else I simply need to read the source). Note that you want to make sure your system boots *without* the pciback.hide before you boot *with* it, because once the kernel starts booting and sees those lines, your keyboard, mouse, and video go away!
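You can sanity-check that the hide took effect after boot: the pciback driver exposes the devices it has claimed under sysfs, so something like

  ls /sys/bus/pci/drivers/pciback/

should show 0000:02:00.0 and friends as symlinks. If your xm build supports it, 'xm pci-list-assignable-devices' reports much the same thing from Xen's point of view.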
Okay, so now I'm booted. I ssh into the system via the network port (err, make sure that's set up before you boot with the pciback.hide too!) and go into virt-manager (via X11 displayed back to my MacBook -- again, make sure you have some way of displaying X11 remotely before you start this) and start up the VM. At that point I can do:
- xm list
- xm pci-attach win7 0000:02:00.0
- xm pci-attach win7 0000:02:00.1
- xm pci-attach win7 0000:00:1a.0
- xm pci-attach win7 0000:00:1a.1
- xm pci-attach win7 0000:00:1a.2
- xm pci-attach win7 0000:00:1a.7
- xm pci-attach win7 0000:00:1b.0
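To double-check that the devices actually landed in the guest, 'xm pci-list win7' should list all seven functions as assigned to the domain; inside Windows, they show up in Device Manager and want drivers just like real hardware.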
The Windows Experience Index reports:
- Calculations per second: 7.6
- Memory: 7.8
- Graphics: 7.2
- Gaming graphics: 7.2
- Primary hard disk: 5.9
Okay, so now I have a one-off boot, but I want this to come up into Windows every time my server boots. I don't want to muck around with a remote shell and such every time I want to play Windows games on my vastly over-powered Linux server (let's face it, a Core i7-950 with 12GB of memory is somewhat undertasked pushing out AFS shares to a couple of laptops). And that, friends, is where part 4 comes in. But we'll talk about that tomorrow.
-ELG