Sunday, August 11, 2013

The killer app for virtualization

The killer application for virtualization is... running legacy operating systems.

This isn't a new thought on my part. When I was designing the Intransa StorStac 7.20 storage appliance platform, I deliberately put virtualization drivers into it so that we could run StorStac as a virtual appliance on some future hardware platform not supported by its 2.6.32 kernel. And yes, that works (I tried it, of course; the only thing that didn't work was the sensors, and if Viakoo ever wants to deliver a virtualized IntransaBrand appliance I know how to fix that). My thought was future-proofing -- I could tell from the layoffs and from the unsold equipment piled up everywhere that Intransa was not long for this world, so I decided to leave whoever bought the carcass a platform that had some legs on it. It has drivers for the network chips in the X9-series SuperMicro motherboards (Sandy/Ivy Bridge) as well as the virtualization drivers, so there's now a pretty reasonable migration path to keep StorStac running into the next decade: first migrate it to Sandy/Ivy Bridge physical hardware, then, once that's EOL'ed, migrate it to running on a virtual platform on top of Haswell or its successors.

But what brought it to mind today was ZFS. I need some of the features of the LIO iSCSI stack and some of the newer features of libvirtd for things I am doing, so I have ended up needing to run a recent Fedora on my big home server (which is now up to 48 gigabytes of memory and 14 terabytes of storage). The problem is that two of those storage drives are offsite backups from work (ZFS replication, duh), and I need ZFS to apply the ZFS diffsets that I haul home from work. That was not a problem for Linux kernels up to 3.9, but Fedora 18/19 have now rolled out 3.10, and ZFSonLinux won't compile against the 3.10 kernel. I found that out the hard way when the new kernel came in and DKMS spit up all over the floor trying to rebuild the ZFS modules.
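
For context, the replication itself is nothing exotic -- it's plain ZFS incremental send/receive. Something along these lines is roughly what the workflow looks like; the pool, dataset, and snapshot names (workpool/data, backup/workpool/data, @monday, @friday) and the portable-drive path are made up for illustration:

    # At work: snapshot, then write the diff between two snapshots to a portable drive
    zfs snapshot workpool/data@friday
    zfs send -i workpool/data@monday workpool/data@friday > /mnt/portable/data-monday-friday.zfs

    # At home: apply that diffset to the offsite copy of the dataset
    zfs receive backup/workpool/data < /mnt/portable/data-monday-friday.zfs

The incremental stream only works if the receiving dataset already has the older snapshot, which is exactly the situation with an offsite backup copy.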

The solution? Virtualization to the rescue! I rolled up a CentOS 6.4 virtual machine, passed all the ZFS drives through to it, gave it a fair chunk of memory, and voila. One legacy platform that can sit there happily for the next few years doing its thing, while the Fedora underneath it changes with the seasons.
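
If you're wondering how the drives get handed to the guest, it's ordinary libvirt block-device passthrough. A minimal sketch, assuming a guest named zfsbox and a drive referenced by its /dev/disk/by-id path (both placeholders):

    # Attach a whole physical drive to the CentOS guest as a virtio disk
    virsh attach-disk zfsbox /dev/disk/by-id/ata-EXAMPLE-DISK-1 vdb --persistent

    # Equivalently, in the guest's XML (virsh edit zfsbox):
    # <disk type='block' device='disk'>
    #   <driver name='qemu' type='raw'/>
    #   <source dev='/dev/disk/by-id/ata-EXAMPLE-DISK-1'/>
    #   <target dev='vdb' bus='virtio'/>
    # </disk>

    # Inside the guest, the pool then imports as usual
    zpool import backup

Using the /dev/disk/by-id names rather than /dev/sdX keeps the passthrough stable if the drives get re-enumerated on the host.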

Of course, that is nothing new. A lot of the infrastructure that I migrated from Intransa's equipment onto Viakoo's equipment was virtualized servers dating, in some cases, all the way back to physical servers that Intransa bought in 2003 when they got their huge infusion of VC money. Still, it's a practical reminder of the killer app for virtualization -- the fact that it allows your OS and software to survive while the underlying drivers and architectures change with the seasons. Now you can make your computers faster without changing anything at all about them: just buy a couple of new virtualization servers with the latest, fastest hardware and migrate your virtual machines over. Quick, easy, and it terrifies OS vendors (especially Microsoft) like crazy, because you no longer need to buy a new OS to run on the new hardware -- you can just keep using your old reliable OS forever.
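
With libvirt and shared storage for the VM images, that migration really is close to a one-liner. A sketch, assuming a guest named legacyvm and a new host called newbox reachable over ssh (both hypothetical):

    # Live-migrate the running guest to the new, faster host
    virsh migrate --live legacyvm qemu+ssh://newbox/system

    # Confirm it landed
    virsh --connect qemu+ssh://newbox/system list

The guest keeps running its same old OS the whole time; only the iron underneath changes.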

-ELG

2 comments:

  1. Just a curiosity question, did you develop the StorStac based on the Centos distribution? I came into possession of an old VA110 and found your blog while searching for information on it.

    Replies
    1. DL, unfortunately Red Hat 6 / CentOS 6 did not come out until roughly two months after we'd started development on StorStac 7.20, and we needed features of a later 2.6 kernel for some of the reliability work we were doing. So it was based on a Fedora release that roughly corresponded to what would become Red Hat 6, to the point where porting it to new platforms became an exercise in taking CentOS 6 drivers and re-compiling them against the Intransa kernel. If you have a VA110, it came with StorStac 6.x, which I believe was based on Fedora 8. I never got around to porting 7.20 to the earlier Intransa hardware because it was pointless by that time -- Intransa was clearly done.

      The VA110 was a pretty rudimentary platform. Intransa created it pretty much as a last gasp at a viable business model -- selling storage for video surveillance -- after their previous business model as a general storage company had failed miserably, with their "Building Block" system going essentially unsold. That said, it worked and was reliable. I did a deep dive into the Intransa storage stack to write an overview of how it worked when we were selling that intellectual property, and it's a nice piece of software, far more coherent than the native Linux storage stack. A bit dated now, of course, but with a bit of cleanup and performance work, a nice Angular-based responsive UI, and a new NoRAID mode for SSDs, it would be quite viable today, assuming its new owners were willing to spend the money on those updates. There are some choke points that would limit performance with SAS3 and SSDs, but they are choke points we knew about, and there are provisions to multi-thread them if it ever becomes necessary (we did all the locking but never turned on the multi-threading in those places, because debugging multi-threading is a pain and we didn't need it with SAS1 and spinning rust).

      Sigh. Just feeling wistful. I've worked on so many great products that didn't survive in the marketplace, either because they weren't marketed correctly or because the competition was just too fierce. Thinking about where I would have taken StorStac in a world of cheap SSDs is another of those thought experiments on my part that will never happen.
