Monday, November 15, 2010

Pushing a graphics card into a VM, part 5

Part 1 Part 2 Part 3 Part 4 Part 5

So here's the final thumbnail summary:

Hardware:

  • Video card #1: ATI Radeon 5750 (the 5770 should work too and is slightly faster, but the 5750 was the one on the Xen compatibility list).
  • Video card #2: nVidia Corporation VGA G98 [GeForce 8400 GS] *PCI* card (BIOS set to use PCI as first card)
  • Intel(r) Desktop Board DX58SO -- not a great motherboard, but it was available at Fry's and was on the Xen VT-d compatibility list
  • Intel Core i7-950 processor
  • 12GB of Crucial 3x4GB DDR3 RAM
  • Hard drives: 2 Hitachi 7200 rpm SATA 2TB drives, configured as RAID1 via Linux software RAID.
  • Antec Two Hundred V2 gamer case to handle swapping OS's via the front 2 1/2" drive port.
  • Various 2 1/2" drives to hold the Linux OS's that I was experimenting with
Software:
  • OpenSUSE 11.3, *stock*.
  • Windows 7, *stock*.
By adding the PCI card, my Linux console remains my Linux console, and Xen properly starts up my Windows DomU. My configuration is now complete. I may extend my Windows LVM volume to 200GB so I can install more games on it, but note that all of my personal files, ISOs, etc. live on Linux. Note that 5.9 is what the Windows Experience Index should report for that particular hard drive combo, so this Windows system performs as well as most mid-range gaming systems.

I added the paravirtualization drivers for the networking and disk controller, but they didn't improve performance at all -- all they did was reduce how much CPU the dom0 qemu was expending implementing virtual disk and network controllers. Given that I have a surplus of CPU on this system (8 threads, 3.2GHz), it's in retrospect no surprise that I saw no performance gain on the disk and network from going paravirtual -- all I did was free up more CPU for things like, say, video encoding.
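In case anyone wants to do that volume extension, it's a one-liner on the Linux side -- a hedged sketch, assuming the volume group still has free extents; the guest has to be shut down first, and the partition then grown from inside Windows with its own disk management tools:

    # grow the logical volume backing the Windows "C" drive to 200GB
    lvextend -L 200G /dev/virtgroup/win7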

Thoughts and conclusions:

One thing that was very clear through this entire process is that I'm very much pushing beyond the state of the art here. The software and hardware configurations needed for this to work were very twiddly -- there is exactly one (1) Linux distribution (OpenSUSE 11.3) which will do it at this point in time, and there were no GUI tools for OpenSUSE 11.3 which would create a Xen virtual machine with the proper PCI devices. Furthermore, the experimental Xen 4.0.1 software on OpenSUSE is almost entirely undocumented -- or, rather, it ships with man pages, but those man pages document an earlier version of Xen which is significantly different from what actually ships with OpenSUSE 11.3.

From a general virtualization perspective, comparing Xen, KVM, and ESXi, Xen currently wins on capabilities, but only by a hair, and those capabilities are almost totally undocumented -- or worse yet, don't work the way the documentation says they work. Xen's only fundamental technological advantage over KVM and ESXi right now is its ability to run paravirtualized Linux distributions without needing the VT-x and VT-d extensions -- a capability which matters to ISPs with tens of thousands of older servers lacking those extensions, but which is becoming less important as VT-x shows up everywhere except the low-end Atom processors. Comparing my Xen installation at home with my KVM installation at work, both of which I have now used extensively and pushed to their limits, I can see why Red Hat is pushing the KVM merger of hypervisor and operating system. KVM gives you significantly greater ability to monitor the overall performance of your system, whereas Xen's 'xm top' is a poor substitute for detailed system-wide monitoring. KVM is also significantly better at resource management, since a single resource manager handles everything -- core hypervisor/dom0 plus VMs -- and the Linux scheduler can consider the whole picture when deciding what to run, rather than the Xen hypervisor sitting out in the background deciding which Xen domain to schedule next based on very little information.

In short, my general conclusion is that KVM is the future of Linux virtualization. Unfortunately my experience with both KVM and Xen 4.0 is that both are somewhat immature compared to VMware's ESX and ESXi products. They are difficult to manage, their documentation is persistently out of date and often incorrect, and both have a bad tendency to crash cryptically when doing things they're supposed to be able to do. Their core functionality works well -- I've been running Internet services on Xen domains for over five years now, and for that problem domain it is bullet-proof, while at work I develop for several different variants of Linux using KVM virtual machines on Fedora 14, plus a Windows VM to run the vSphere management tools, and that has been bullet-proof too. But they decidedly are not as polished as VMware at this point. The one exception is Citrix's XenServer, which is polished but lacks the PCI passthrough capability of ESXi and thus was not useful for the projects I was considering.

My take on this, however, is that VMware's time as head of the virtualization pack is going to be short. There isn't much more that they can add to their platform that the KVM and Xen people aren't already working on. Indeed, the graphics passthrough capability of Xen is already beyond where VMware is. At some point VMware is going to find themselves in the same position vs. open source virtualization that SGI and Sun found themselves in vs. open source POSIX. You'll note that SGI and Sun are no longer in business...

-ELG

Sunday, November 14, 2010

Pushing a graphics card into a VM, part 4

Part 1 Part 2 Part 3 Part 4 Part 5

Okay, so virt-manager did pick up my new VM once I created it with xm create on a config file, but when I rebooted the system the VM was gone. So how can I fix this? Well, by taking advantage of functionality that OpenSUSE has had for auto-starting Xen virtual machines all along: Just move my config file into /etc/xen/auto and it'll auto start (and auto shutdown, if I have the xen tools installed) at system boot.
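In concrete terms that's just the following (a minimal sketch; the config file name and location are hypothetical stand-ins for whatever you fed to xm create):

    # anything placed (or symlinked) in /etc/xen/auto is started by the xendomains
    # init script at boot and shut down cleanly at system halt
    mv /etc/xen/win7.cfg /etc/xen/auto/
    # make sure the xendomains service itself is enabled
    chkconfig xendomains on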

Of course, that requires a config file. Rather than paste it here, I'll let you view the config file as a text file. Note that 'gfx_passthru=1' is commented out. The Xen documentation says I need it, but if I put it there, my VM doesn't start up -- it crashes into the QEMU monitor.

I also ran into a timing issue: pciback grabs the console away from Linux and leaves the video card half-initialized, and when Xen then grabs the video card and shoves it into the VM, the card locks up the system solid the moment Windows tries to write to it. My solution was even simpler -- put the older of the nVidia cards back into the system, and load the 'nouveau' driver early using YaST's System > Kernel > INITRD_MODULES and System > Kernel > MODULES_LOADED_ON_BOOT settings. This flips the console away from the ATI card early enough that it doesn't conflict with Xen handing the video card to Windows. It also gives me a Linux console on the nVidia card that I can switch to by plugging a second keyboard into the front USB port on my chassis (the one I did *not* push into the Windows VM) and flipping my monitor to its DVI input (rather than the HDMI coming from the ATI card).
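For those who don't want to click through to the config file mentioned above, here is roughly its shape -- a hedged sketch rather than a verbatim copy; memory, vcpus, the MAC address, the bridge name, and the tool paths are illustrative, while the PCI addresses and LVM path are the ones described in this series:

    name = "win7"
    builder = "hvm"
    kernel = "/usr/lib/xen/boot/hvmloader"
    device_model = "/usr/lib/xen/bin/qemu-dm"
    memory = 4096
    vcpus = 4
    boot = "c"
    # point the emulated disk straight at the LVM volume created in Part 3
    disk = [ 'phy:/dev/virtgroup/win7,hda,w' ]
    vif = [ 'type=ioemu,mac=00:16:3e:00:00:01,bridge=br0' ]
    # the same devices hidden with pciback.hide in Part 3
    pci = [ '02:00.0', '02:00.1', '00:1a.0', '00:1a.1', '00:1a.2', '00:1a.7', '00:1b.0' ]
    # gfx_passthru = 1   # the docs say this is needed; with it, the domain drops into the QEMU monitor
    vnc = 1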

With all of this done, I can now reboot my system and get Windows on video card 0, and Linux on video card 1. I suppose I could reverse the video cards (to give the boot video card to Linux), but unfortunately my board puts the second 16-lane PCIe slot too close to the bottom of the case, and a double-width PCIe card won't fit there. Maybe when I upgrade to one of those spiffy SuperMicro server motherboards with IPMI and such, at which point I won't need a second video card anyhow because the on-board video will suffice for Linux...

Next up in Part 5: Thoughts and conclusions.

Saturday, November 13, 2010

Pushing a graphics card into a VM, Part 3

Part 1 Part 2 Part 3 Part 4 Part 5

OpenSUSE 11.3 was quite an easy install. I haven't used SUSE since the early 'oughts, but first impressions were pretty good. OpenSUSE 11.3 defaults to KDE, which is a change from the other distributions I've been using for the past few years -- Ubuntu on my server at home, Debian on my web and email server, and various Red Hat derivatives at work -- and it seems to be pretty well put together. The latest incarnation of YaST is more easily manageable from the command line over a slow network connection than the latest Ubuntu or Red Hat releases, which rely on GUI tools at the desktop.

The biggest difference between Red Hat and SUSE is that SUSE uses a different package dependency manager, "zypper", which is roughly equivalent to Red Hat's "yum" and Debian's "apt-get" but with its own quirks. It appears to be slightly faster than "yum" and roughly the same speed as "apt-get". If you wonder why SUSE/Novell wrote "zypper": at the time they wrote it, "yum" was excruciatingly slow and utterly unusable unless you had the patience of Job. Red Hat has sped up "yum" significantly since then, but SUSE has stuck with "zypper" nevertheless.

I also set up the bridging VLAN configuration that I described in my previous post about doing the same thing on Fedora. Again, SUSE has slightly different syntax than Red Hat for this, in /etc/sysconfig/network/* (note: *not* network-scripts), but again it was fairly easy to figure out by reading the ifup / ifdown scripts and consulting SUSE's documentation.
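For reference, the SUSE flavor of that bridge-over-VLAN setup looks roughly like this -- a sketch only, with made-up interface names, VLAN ID, and address:

    # /etc/sysconfig/network/ifcfg-vlan10 -- tagged VLAN 10 on top of eth0
    STARTMODE='auto'
    BOOTPROTO='none'
    ETHERDEVICE='eth0'

    # /etc/sysconfig/network/ifcfg-br0 -- bridge with the VLAN as its only port
    STARTMODE='auto'
    BOOTPROTO='static'
    IPADDR='192.168.10.2/24'
    BRIDGE='yes'
    BRIDGE_PORTS='vlan10'
    BRIDGE_STP='off'
    BRIDGE_FORWARDDELAY='0'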

So anyhow, I installed the virtualization environment via YaST and rebooted into the Xen kernel it downloaded. At that point I created a "win7" LVM volume in LVM volume group "virtgroup" on my 2TB RAID array, went into Red Hat's "virt-manager" and attached to my Xen domain, then told it to use that LVM volume as the "C" drive and installed Windows 7 on it. I'm using an LVM volume because at work with KVM, I find that this gives significantly better disk I/O performance in my virtual machine than pointing the virtual disk drive at a file on a filesystem. Since both Xen and KVM use QEMU to provide the virtual disk drive to the VM, I figured the same issue would apply to Xen, and adopted the same solution I adopted at work -- just point it at an LVM volume, already. (More on that later, maybe.)
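For completeness, creating that backing volume is a one-liner (the size is illustrative; the volume group and volume names are the ones above):

    # carve a logical volume out of the RAID-backed volume group for the Windows "C" drive,
    # then point the guest's virtual disk at /dev/virtgroup/win7 instead of a disk image file
    lvcreate -L 100G -n win7 virtgroup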

Okay, so now I have Windows 7 64-bit installed and running. I shut it down and went to attach PCI devices to it via virt-manager and... err. No. Virt-manager wouldn't do it. Red Hat strikes again: it claims that Xen can't do PCI passthrough! So I went back to the handy Xen Wiki and started figuring out, via trial and error, how to use the "xm" command line -- where the 'man' page for xm doesn't in any way reflect the actual behavior of the program that you see when you type 'xm help'. So here we go...

First, claw back the physical devices you're going to use by booting with them attached to pciback. So my module line for the xen.gz kernel in /boot/grub/menu.lst looks like...

module /vmlinuz-2.6.34.7-0.5-xen root=/dev/disk/by-id/ata-WDC_WD5000BEVT-22ZAT0_WD-WXN109SE2104-part2 resume=/dev/datagroup/swapvol splash=silent showopts pciback.hide=(02:00.0)(02:00.1)(00:1a.0)(00:1a.1)(00:1a.2)(00:1a.7)(00:1b.0)

Note that while XenSource has renamed pciback to 'xen-pciback', OpenSUSE renames it back to 'pciback' for backward compatibility with older versions of Xen. So anyhow, on my system, this hides the ATI card and its sound card component, plus the USB controller to which the mouse and keyboard are attached. I leave the other USB controller attached to Linux. I did not have any luck pushing individual USB devices directly to the VM; I had to push the entire controller instead. Apparently the version of QEMU shipped with OpenSUSE 11.3's Xen doesn't implement USB passthrough (or else I simply need to read the source). Note that you want to make sure your system boots *without* the pciback.hide before you boot *with* it, because once the kernel starts booting and sees those lines, your keyboard, mouse, and video go away!
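One sanity check worth doing once you're booted with pciback.hide in place (a suggestion of mine, not part of the original recipe): confirm the devices really did end up bound to pciback before trying to hand them to a domain.

    # devices hidden from dom0 show up under the pciback driver
    ls /sys/bus/pci/drivers/pciback
    # and xm should report them as assignable
    xm pci-list-assignable-devices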

Okay, so now I'm booted. I ssh into the system via the network port (err, make sure that's set up before you boot with the pciback.hide too!) and go into virt-manager (via X11 displaying back to my Macbook, again make sure you have some way of displaying X11 remotely before you start this) and start up the VM. At that point I can do:

  • xm list
and see my domain running, as well as log into Windows via virt-manager. So next, I attach my devices...

  • xm pci-attach win7 0000:02:00.0
  • xm pci-attach win7 0000:02:00.1
  • xm pci-attach win7 0000:00:1a.0
  • xm pci-attach win7 0000:00:1a.1
  • xm pci-attach win7 0000:00:1a.2
  • xm pci-attach win7 0000:00:1a.7
  • xm pci-attach win7 0000:00:1b.0
Windows detects the devices, loads drivers, and prompts me to reboot to activate them. So I tell Windows to reboot, and it comes back up, but nothing's showing up on my real (as opposed to virtual) video screen. So I go into Device Manager in Windows to see what happened. The two USB devices (keyboard and mouse) show up just fine, but the ATI video card shows up with an error. I look at what Windows tells me about the video card, and it says there is a resource conflict with another video card -- the virtual video card provided by QEMU. So I disable the QEMU video card, reboot and... SUCCESS! I now have Windows 7 on my main console with video and keyboard and mouse!

Windows Experience reports:

  • Calculations per second: 7.6
  • Memory: 7.8
  • Graphics: 7.2
  • Gaming graphics: 7.2
  • Primary hard disk: 5.9
Those are pretty good, quite sufficient for gaming, except for the disk performance which is mediocre because we're going through the QEMU-emulated hard drive adapter rather than a paravirtualized adapter. When doing network I/O to download Civilization V via Steam I also notice mediocre performance (and high CPU utilization on the dom0 host) for the same reason. We'll fix that later. But for playing games, we're set! Civilization V looks great on a modern videocard on a 1080P monitor with a fast CPU!

Okay, so now I have a one-off boot, but I want this to come up into Windows every time my server boots. I don't want to have to muck around with a remote shell and such every time I want to play Windows games on my vastly over-powered Linux server (let's face it, a Core i7-950 with 12GB of memory is somewhat undertasked pushing out AFS shares to a couple of laptops). And that, friends, is where part 4 comes in. But we'll talk about that tomorrow.

-ELG

Pushing a video card into a VM, Part 2

Part 1 Part 2 Part 3 Part 4 Part 5

The first issue I ran into was that my hardware was inadequate to the task. My old Core-2 Duo setup lacked VT-d support. So I went to the Xen compatibility list and found a motherboard which supported VT-d and upgraded my motherboard. At the same time I also upgraded my case to an Antec case that has a slot on the front for plugging in 2 1/2 inch drives. This was to make it easier to swap operating systems. Theoretically you can hot-swap, but I've not tested that and don't plan to.
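By the way, if you're wondering whether your own board and CPU actually do VT-d before buying new hardware, there are a couple of quick checks (a rough sketch; the exact wording of the messages varies by kernel and Xen version):

    # on bare-metal Linux, the kernel logs DMAR/IOMMU lines when the BIOS exposes VT-d
    dmesg | grep -i -e DMAR -e IOMMU
    # under Xen, the hypervisor boot log says whether I/O virtualisation was enabled
    xm dmesg | grep -i 'I/O virtualisation'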

Since I am inherently a lazy penguin (much like Larry Wall), the next thing I did was try the "virtualization environments". I found that XenServer was a very well designed environment for virtualizing systems in the cloud. Unfortunately it was also running a release 3.x version of Xen rather than the new 4.0 release, and did not implement PCI passthrough or USB passthrough to fully virtualized VM's natively. There were hacks you could do, but once you start doing hacks, the XenServer environment is not really a nice place to do them. So I moved on.

ProxMox VE is a somewhat oversimplified front-end to KVM and OpenVZ. It looks like a nice environment for running a web farm via web browser, but unfortunately it does not support PCI passthrough natively either. Again, you can start hacking on it, but again once you start doing that you might as well go to a non-dedicated environment.

Ubuntu 10.10 with KVM was my next bet. I *almost* got it running, but the VM wouldn't attach the graphics card. It turns out that was another issue altogether, but looking at the versions of QEMU and KVM provided, it appeared that Fedora 14 had one version newer (as you'd expect, since Fedora 14 came out almost a month later), so I went to Fedora 14 instead.

I got close -- really close -- with Fedora 14. But two different video cards -- an old nVidia 7800GT and a new nVidia GTS450 -- both ended up with error messages in the libvirtd logs saying there was an interrupt conflict that prevented attaching the PCI device. I ranted to a co-worker, "I thought MSI was supposed to solve that!" So I looked at enabling MSI on these nVidia cards and found out that... err... no. Not a good idea; even if I wanted to, the cards generally crashed things hard if you tried. So I went back to the XenSource.com wiki on VGA passthrough and followed the link to the list of video cards, and... err, okay. An ATI Radeon 5750 has been reported as working with Xen's VGA passthrough.

So, I swapped that out, and tried again with Fedora 14. This time the KVM module crashed with a kernel oops.

At this point I'm thinking, otay, KVM doesn't seem to want to do this. Xen, on the other hand, has a Wiki and all documenting how to do this. So let's use Xen instead of KVM. The problem is that Xen is an operating system: it relies on having a special paravirtualized kernel for its "dom0" that handles the actual I/O driver work. Red Hat claims providing such a kernel would be too much work and that they won't do it until the dom0 patches are rolled into the upstream kernel by Linus -- this despite the fact that Red Hat has patched their kernels to the point where Linus would barely recognize them if someone plunked the source onto his disk. It's that whole Not Invented Here thingy again: Red Hat invented KVM and was looking for an excuse not to include a Xen dom0 kernel, and there you go. I looked at downloading a dom0 kernel for Fedora 14, but then... hmm. Look. OpenSUSE 11.3 *comes* with a Xen dom0 kernel. So let's just install OpenSUSE 11.3.

OpenSUSE 11.3 is what I eventually had success with. But to do that, I ended up having to fight Red Hat -- again. But more on that in Part 3.

-ELG

Pushing a graphics card into a Xen VM, Part 1

Part 1 Part 2 Part 3 Part 4 Part 5

One of the eternal bummers for Linux fanboys is the paucity of games for Linux. This is, in part, because Linux is not an operating system; Linux is a toolkit for building operating systems -- and each operating system built with the Linux toolkit is different, but all of them claim to be "Linux". From a game designer's perspective there is no such thing as "Linux" -- each of the variants puts files in different places, each of the variants has a different way of configuring X11, and so forth.

And speaking of X11, that's another issue. Mark Shuttleworth got a lot of heat for saying that desktop Linux was never going to be competitive as long as it was saddled with the decades of fail that are X11, when he proposed moving Ubuntu Linux to Wayland. But the only Unix variant that has ever gotten any traction on the desktop -- Mac OS X -- did so by abandoning X11 and going to its own lighter-weight GUI library, which forced a common interface upon all programs that ran on the platform (except for ported X11 programs, which were made deliberately ugly by the X11 server running on top of the native UI, in order to encourage people to port them to the native UI). Linux fanboys might argue that OpenGL over X11 is theoretically capable of handling gaming demands, etc. etc., but the proof is in the pudding -- if it's so easy, why isn't anybody doing it?

So anyhow, one of the interesting things about the Intransa Video Appliance is that it looks like Windows if you sit down at the console... but behind the scenes, it's actually VMware on top of a Linux-based storage system. So why not, I wondered, just push the entire video subsystem into Windows via VT-d? I mean, it's not as if Linux user interfaces run any slower remotely displayed over VNC than they do locally, they're pretty light-weight by modern standards. So if you could push the display, keyboard, and mouse into a Windows virtual machine that was started up pretty much as soon as enough of Linux was up and going to support it, you could have a decently fast gaming machine, *and* have a good Linux development and virtualization server -- all on the same box.

So, I assembled my selection of operating systems and started at it. I gathered the bleeding edge of Linux -- Ubuntu 10.10, Fedora 14, Citrix XenServer 5.6.0, ProxMox VE 1.6, and OpenSUSE 11.3 -- and set to work seeing what I could do with them...

Next up: Part 2: The distributions.

-ELG

Tuesday, November 2, 2010

Microsoft in a nutshell

It's no secret that Microsoft is a company in trouble. At one time they had a significant portion of the smartphone market; now they're an also-ran with single-digit market share. Their attempt to buy consumer marketshare in the gaming console market has generated some marketshare, but also significant losses. Their Zune Phone experiment lasted only two months before ignoble abandonment. The only things they have that make money right now are their core Windows and Office franchises -- the entire rest of the company is one big black hole of suck, technologically, financially, or both. And while their market share in desktop operating systems is secure for the foreseeable future, with no viable competitor anywhere in sight (don't even mention Linux unless you want to cause gales of laughter; Linux on the desktop is a mess), Office faces a threat from OpenOffice. Plus, their very profitable Windows Server franchise, which accounts for a small percentage of their unit sales but a large percentage of their revenue, is steadily eroding as it becomes clear to almost everyone not tied to Microsoft Exchange that Linux rules the world. Amazon EC2 runs on Linux, not Windows -- as does every other cloud play on the Internet. 'Nuff said.

Today something happened which epitomized this suck. I opened up an email in Microsoft Hotmail. At the top of the email, in red, was the following message: "This message looks very suspicious to our SmartScreen filters, so we've blocked attachments, pictures, and links for your safety."

The title of the email: "TechNet Subscriber News for November".
The sender of the email: technote@microsoft.com

Siiiiigh... even their own spam filter thinks they suck.

-ELG