Monday, November 15, 2010

Pushing a graphics card into a VM, part 5

Part 1 Part 2 Part 3 Part 4 Part 5

So here's the final thumbnail summary:

Hardware:

  • Video card #1: ATI Radeon HD 5750 (the 5770 should work too and is slightly faster, but the 5750 was the one on the Xen compatibility list).
  • Video card #2: nVidia GeForce 8400 GS (G98) *PCI* card (BIOS set to use the PCI card as the primary display)
  • Intel Desktop Board DX58SO -- not a great motherboard, but it was available at Fry's and was on the Xen VT-d compatibility list
  • Intel Core i7-950 processor
  • 12GB of Crucial 3x4GB DDR3 RAM
  • Hard drives: 2 Hitachi 7200 rpm SATA 2TB drives, configured as RAID1 via Linux software RAID (a sketch of the setup follows these lists).
  • Antec Two Hundred V2 gamer case to handle swapping OS's via the front 2 1/2" drive port.
  • Various 2 1/2" drives to hold the Linux OS's that I was experimenting with
Software:
  • OpenSUSE 11.3, *stock*.
  • Windows 7, *stock*.
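The software RAID1 on the Hitachi drives is nothing exotic. For reference, here's a minimal sketch of how such an array is typically created -- the partition names (/dev/sda1, /dev/sdb1) are placeholders, not the exact commands from this build:

    # Mirror two partitions into one md device (device names assumed)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # Watch the initial resync progress
    cat /proc/mdstat
    # Record the array so it reassembles at boot
    mdadm --detail --scan >> /etc/mdadm.conf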
By adding the PCI card, my Linux console remains my Linux console, and Xen properly starts up my Windows DomU. My configuration is now complete. I may extend my Windows LVM volume to 200GB so I can install more games on it (a sketch of how that works is below), but note that all of my personal files, ISO's, etc. live on Linux.

Note that 5.9 is the expected Windows Experience Index score for that particular hard drive combo, so this Windows system is as good as most mid-range gaming systems performance-wise. I added the paravirtualization drivers for the networking and disk controller, but they didn't improve performance at all -- all they did was reduce how much CPU the dom0 qemu was expending implementing virtual disk and network controllers. Given that I have a surplus of CPU on this system (8 threads at 3.2GHz), it's in retrospect no surprise that going paravirtual gained me nothing on disk and network performance -- all it did was free up more CPU for things like, say, video encoding.
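Extending the DomU's volume, when I get around to it, is a one-liner on the dom0 side. A sketch, with a hypothetical volume group and logical volume name (vg0/win7) standing in for the real ones:

    # On the dom0: grow the logical volume backing the Windows DomU to 200GB
    lvextend -L 200G /dev/vg0/win7

Windows won't see the extra space automatically; after restarting the DomU, you extend the NTFS partition into it from Disk Management (or diskpart's 'extend').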

Thoughts and conclusions:

One thing that was very clear through this entire process is that I'm very much pushing beyond the state of the art here. The software and hardware configurations needed for this to work were very twiddly -- there is exactly one (1) Linux distribution (OpenSUSE 11.3) which will do it at this point in time, and there were no GUI tools for OpenSUSE 11.3 which would create a Xen virtual machine with the proper PCI devices, so the DomU config file had to be edited by hand (a sketch of the relevant lines is below). Furthermore, the experimental Xen 4.0.1 software on OpenSUSE is almost entirely undocumented -- or, rather, it ships with man pages, but those man pages document an earlier version of Xen which is significantly different from what actually ships with OpenSUSE 11.3.
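For the curious, the hand-editing boils down to a few lines in the DomU's config file. A minimal sketch of the relevant HVM bits -- the file path, domain name, and PCI addresses are placeholders for your own setup, and the devices must already be hidden from dom0 via pciback:

    # Excerpt from an xm-style DomU config (e.g. /etc/xen/vm/windows7 -- hypothetical path)
    name    = "windows7"
    builder = "hvm"
    memory  = 4096
    vcpus   = 4
    # Pass the ATI card's video and HDMI-audio functions through to the guest;
    # replace these with the bus:device.function addresses from your own lspci output
    pci     = [ '02:00.0', '02:00.1' ]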

From a general virtualization perspective, comparing Xen, KVM, and ESXi: Xen currently wins on capabilities, but only by a hair, and those capabilities are almost totally undocumented -- or worse yet, don't work the way the documentation says they work. Xen's only fundamental technological advantage over KVM and ESXi right now is its ability to run paravirtualized Linux distributions without needing the VT-x and VT-d extensions. That capability matters to ISP's with tens of thousands of older servers lacking those extensions, but it is becoming steadily less important now that VT-x is everywhere except the low-end Atom processors.

Comparing my Xen installation at home with my KVM installation at work -- both of which I have now used extensively and pushed to their limits -- I can see why Red Hat is pushing KVM's merger of hypervisor and operating system. KVM gives you significantly greater ability to monitor the overall performance of your system; on Xen, 'xm top' is a poor substitute for detailed system-wide monitoring. KVM is also significantly better at resource management, since a single resource manager handles everything (the core hypervisor/dom0 plus the VM's), and the Linux scheduler can consider everything when deciding what to schedule, rather than the Xen hypervisor sitting out in the background deciding which Xen domain to run next based on very little information.
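To make the monitoring gap concrete, here's roughly what inspecting guests looks like on each side -- the point being that under KVM nothing special is required (the process matching below is illustrative):

    # Xen: domain statistics come only from Xen's own tools
    xm list
    xm top

    # KVM: every guest is an ordinary qemu-kvm process, so the
    # standard Linux toolbox (top, ps, iostat, ...) just works
    virsh list --all
    top -p $(pgrep -d, -f qemu-kvm)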

In short, my general conclusion is that KVM is the future of Linux virtualization. Unfortunately, my experience with both KVM and Xen 4.0 is that both are somewhat immature compared to VMware's ESX and ESXi products. They are difficult to manage, their documentation is persistently out of date and often incorrect, and both have a bad tendency to crash cryptically when doing things they're supposed to be able to do. Their core functionality works well: I've been running Internet services on Xen domains for over five years now, and for that problem domain it is bullet-proof, while at work I'm developing for several different variants of Linux using KVM virtual machines on Fedora 14 (as well as running a Windows VM to handle the vSphere management tools), and that has been bullet-proof too. But they decidedly are not as polished as VMware at this point. The exception is Citrix's XenServer, but it lacks the PCI passthrough capability of ESXi and thus was not useful for the projects I was considering.

My take on this, however, is that VMware's time as head of the virtualization pack is going to be short. There isn't much more that they can add to their platform that the KVM and Xen people aren't already working on. Indeed, the graphics passthrough capability of Xen is already beyond where VMware is. At some point VMware is going to find themselves in the same position vs. open source virtualization that SGI and Sun found themselves in vs. open source POSIX. You'll note that SGI and Sun are no longer in business...

-ELG

2 comments:

  1. Besides netbooks, some mid-range processors like the Intel Core 2 Q8200 also lack all virtualization support:
    http://ark.intel.com/Product.aspx?id=36547

    And the graphics card's support for the IOMMU has to be complete enough for virtualization.

    Checking the compatibility list is definitely needed.

  2. The Core 2 Q8200 was pretty much obsoleted by the Core i series; it's two generations old now and you won't find it in any new machines. Basically every consumer machine you buy today will have VT-x support. VT-d support, on the other hand, is rare, and as you point out, the graphics card has to support the IOMMU too if you're going to push it into a virtual machine. Luckily I've tried three or four different consumer-grade ATI cards over the past few months, and it appears that all current consumer ATI cards are sufficiently good to be pushed into a VM if your motherboard and processor have VT-d support.

    Of course, the "if your motherboard and processor have VT-d support" is the *big* if here. Which is why I later changed tactics on how I was doing this, and went to using a Windows 7 host and Linux guest rather than vice versa...
