Thursday, July 28, 2011

Do Not Annoy The Geek

What brought that up? Well, two things. The first is that I'm updating my report on virtualization systems with a new round of testing. This new round includes some entrants that didn't exist during the first round -- Scientific Linux 6 / CentOS 6.0, Red Hat Enterprise Linux 6.1, Ubuntu 11.04, and OpenSUSE 11.4. I'll write that up as soon as I finish with OpenSUSE 11.4, the only one left to evaluate. The second thing that happened was that my MacBook Pro decided it didn't like to sleep or shut down cleanly, which annoyed me because, reading the forums, it appears the only fix is to wipe the disk, re-install the OS and my applications from scratch, then restore only data, not configuration info. Makes me feel like a Windows user. Which annoys the geek.

So, how did the round of virtualization evaluations annoy the geek? Well: SL/CentOS, Ubuntu, and OpenSUSE can be downloaded from their respective web sites. Red Hat requires you to sign up for an evaluation. Okay, so I already had an account on their site from the *last* version of their software that I evaluated back in December of last year (the original RHEL 6.0 release), so I went back in and signed up again. And promptly got rejected. "We do not accept personal email addresses for evaluations." I.e., they only want corporations to evaluate their software. I shrugged, changed my email address to a .com address that I registered over ten years ago but have never used (but which forwards email to my personal inbox), and downloaded the software. But I was annoyed.

So why is this important? Simple. I am not a product manager. I don't make final decisions about what to buy. But I am, more often than not, the person the product manager or the IT manager comes to when they have a problem and want to know what technology they need. People like me are called influence leaders: our combination of technical and communications skills, and our understanding of both technology and actual business issues, mean that folks who have one but not the other come to us to decide what the next round of innovation deployment is going to be. My estimate is that I've personally influenced or caused to happen over $5M/year of purchases over the past ten years. And that is true of geeks in general -- geeks may not be the people who sign the checks, but they're the people who evaluate the technology and tell the people who sign the checks what to buy.

So anyhow, back to Red Hat. That question on their web site is completely anti-geek. See, while evaluating virtualization technology is part of my job description (seriously -- it's there in black and white), for the most part I do it as a private citizen, not as a qualified lead. That's because otherwise I get bombarded with spam emails and phone calls asking me to buy stuff -- but I don't buy stuff, I evaluate stuff. Other people in my company buy stuff, and no, you are not getting their names or phone numbers from me, because then they'll come to me and ask me questions about you that I can't answer until I finish evaluating your product. But Red Hat's marketing department is looking for qualified leads -- potential check-writers that they can bombard with phone calls and spam emails. You won't get very far bombarding me with emails and phone calls, because I am a technical leader: I don't write checks, and you'll do nothing but annoy me. If you want sales generated from me, you'll have to do it the old-fashioned way -- by letting me do a technical evaluation of your product for a month or so to see if it'll fulfill our company's needs better than the competition.

So how important is this sort of geek cred? Well, let's look at Sun Microsystems. Back in the day, they had geek cred out the wazoo. Geeks lusted after a Sun workstation on their desk, and fumed at the notion that they had to settle for a lowly PC, because Sun workstations with their BSD-based SunOS were geek nirvana, with a full geek programming toolkit including a "C" compiler and a programmable shell scripting environment. Then Sun decided, upon releasing AT&T System V.4 as "Solaris", that they would no longer include the software development tools with their workstations. That was the beginning of the end for Sun; it just took fifteen years to catch up with them. The geek lust eventually moved to PCs running Linux. Linux went up and up and up, Sun lost market share to PCs running both Windows and Linux year after year -- and to Apple once Apple moved to Unix and started including development tools to attract the geeks to their platform -- until finally Sun got acquired for the few competitive products they had left. Sun made a last-ditch effort with OpenSolaris to regain geek cred, but it was too little, too late -- Linux had such a lock on geek mindshare that there was no way Sun could regain enough of it to make a difference. When geeks lust for Unix servers and workstations now, they lust for well-endowed Linux servers and MacOS workstations, not Sun ones -- but that was *not* true in 1990. In 1990, if you were a geek, you had serious Sun lust.

Of course, this points out two things about geek cred: a) it takes a long time to build up to the point where it makes or breaks technology companies (but in the end it always does), and b) once you lose it, getting it back is a real problem. Well, and c) if your product is pretty much the only thing that solves a particular problem, you don't need geek cred (which is why IBM System/370-architecture machines still sell plenty despite being utterly unfashionable in geek circles). So if you're a marketing type looking at the long term, what do you need to know?

  1. Provide evaluation copies of your software without jumping through too many hoops. Basically, anything that involves delay is too many hoops for geeks, geeks are impatient and want instant gratification. At one employer they made it so difficult to download an evaluation copy of the software that virtually nobody did so -- the only way you could do it was to basically become a qualified lead and have one of their salesmen call you. But that annoys the geek. One reason why my employer uses VMware ESXi heavily is because we in the technical staff could download the evaluation version of the software, see that it worked quite well for our projects, and then, and only then, did we go to our superiors and have them negotiate whatever licensing was needed to make our projects work. If VMware had made it hard to get evaluation versions of their software, that would have never happened -- we would have used one of the Open Source virtualization platforms despite their limitations.
  2. Describe what your product does on your web site. Seriously. I can't count the number of times I read some breathless but vague marketing hype on the web site, try to infer whether it solves the problem I'm trying to solve, download the actual eval product... and meh. It didn't do anything like what I thought it was when I read the breathless hype on the web site. Tell me what problem you're trying to solve with your product. Yes, I know you don't want to limit yourself, but please. Give me a clue *without* having to waste my time, because wasting time annoys the geek!
  3. Provide technical documentation on your web site. At a previous employer I once wrote an entry for a customer-facing company blog that described, in some technical detail, exactly what our product did. That blog entry got kiboshed by a PHB because "it lets out too many details of our secret sauce". Thing is, geeks don't like secrets. They want to know what it does and how it does it. If you don't tell them, they'll decide you don't have anything but smoke and mirrors and go elsewhere. It always frustrated me at that company that we had a cool product, but nobody knew it -- because we refused to tell anyone. But there's no such thing as a secret in technology. Everything can be reverse-engineered. So keeping technical details close to the vest doesn't accomplish anything but annoying the geek -- and hurting your street cred, since it makes you look like a fly-by-night peddling smoke and mirrors.
  4. Provide a programmable interface if reasonable and possible. End users won't care, of course. But one reason why Linux (and eventually MacOS X) won out over the other Unixes was that they were very open to programmers. Linux of course comes with complete source code, while MacOS has source available for many components of the system, has a free programming tool set (Xcode), and has that all-important Unix shell prompt with all the geeky Unix tools. Meanwhile, the other Unixes required you to fork over significant sums of money for their programming toolkits. One of the things Microsoft has done right recently is to include PowerShell with all their systems and export most of their OS APIs as PowerShell objects. It has decidedly helped their reputation in geek circles -- you'll notice that while you still have some geeks saying "Microsoft is Teh Eeeevil", most have moved to a position of neutrality on the whole subject of Microsoft. (Disclaimer: this is being written in Firefox on Fedora 15 running under VirtualBox on a Windows 7 host.) I still wouldn't deploy a Windows server unless you paid me a significant sum of money to do so, but people who put Linux on the desktop because Microsoft is 'Teh Eeevil' are just being twits.
  5. DON'T CRASH! Seriously. Crashes annoy geeks even more than missing features. If there's a choice between having a cool feature and having a reliable product, err on the side of a reliable product. Nobody ever lost customers by having a reliable product, but once you lose cred by getting a reputation for having a crashy product... well.
  6. And -- participate in the geek community. There were some tools at one employer that we could have contributed to the Open Source community that would have worked quite well at building geek cred, but management was totally against it because it would "divulge corporate secrets". Well, that company is out of business now. Openness sells. Secrecy doesn't. If you can't out-innovate your competition once they figure out your "secrets", you're in the wrong business -- because the technology industry moves so fast that any "secrets" you divulge *should* be obsolete long before any competitor can take advantage of them. If not, you're as doomed as the startup I worked for that took three years to create its first product, which was two years obsolete by the time they finally got it out the door late, slow, and buggy. You have to move *fast* in the technology industry, and if you can't out-run any "secrets" you divulge to the geek community, you're going to be out of business soon anyhow.
So anyhow, just one of those things to keep in mind if you're a marketing person wondering how to get market share. Find some geeks and ask them what annoys them about your product or about how your company sells your product. Then fix it. It won't create short term sales figures, but if you're wanting a long career for a stable company, that's how to do it -- something that Red Hat needs to remember, in their current rush to go complete pointy haired boss on the geeks when it comes to evaluating their enterprise products.


Wednesday, July 20, 2011

Accessing raw drives from VirtualBox

In my previous virtualization series, I avoided VirtualBox because there was no GUI support for accessing raw drives, and I have a pair of 2TB Linux RAID drives that I wanted a Linux VM on my Windows host to assemble and export to my network as a set of network shares. However, when I wanted to see the Gnome Shell functionality of Fedora 15, I had no choice but to use VirtualBox -- it's the only virtualization solution out there that currently supports OpenGL acceleration for Linux virtual machines.

So given that incentive, I actually did it. The secret is VirtualBox's VMDK support for importing VMware virtual drives. The VMDK header format allows creating a virtual disk that is actually just a pointer to a physical drive. So I followed the directions on the VirtualBox site to create two vmdk files pointing at my physical drives, and then added them to my freshly installed Fedora 15 virtual machine as "existing" drives.

The first thing to note is that on Windows Vista or Windows 7, you MUST run VirtualBox as Administrator to access physical drives. Otherwise your VM simply won't start (and you can't even create your virtual VMDK's if you're not doing it as Administrator).

The next thing to note is where Oracle puts all the binaries you'll need. So you'll need to pop open a command prompt as the administrative user (i.e., right-click Command Prompt and "Run as administrator") and:

C:> path "%path%;C:\Program Files\Oracle\VirtualBox"

Then you'll need to find your virtual machines. For me:

C:> cd "\Users\eric\VirtualBox VMs"

did the trick.

Now you can run the commands. I knew from my VMware Player experiment how my two physical drives were identified in Windows: they were drives 0 and 1, while my boot drive (which plugs into the front) was drive 2, so I went ahead and used those. You may need to do a bit of poking around to figure out which drives to map. So anyhow:

C:> VBoxManage internalcommands createrawvmdk -filename Fedora15/Disk0.vmdk -rawdisk \\.\PhysicalDrive0
C:> VBoxManage internalcommands createrawvmdk -filename Fedora15/Disk1.vmdk -rawdisk \\.\PhysicalDrive1

Then I right-clicked my VirtualBox icon and ran it as Administrator (you will probably want to make that permanent in the icon's properties so a double-click does the same), selected the Fedora15 virtual machine, opened its Settings, hit "+" in the Storage section, and added the two hard drives as "existing" virtual hard drives. I then started the VM and... success! I saw my two drives in /proc/partitions.

Well.... *almost* success. Fedora 15 didn't activate my arrays on that first boot. So I ran mdadm to assemble the RAID arrays, vgscan to detect my volume groups on the newly assembled arrays, then lvchange -ay to activate the detected logical volumes. But once I did all that I could mount my filesystems and add them to /etc/fstab, and Fedora 15 properly assembled everything on my next reboot.
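For the record, the recovery sequence was roughly this (a sketch from memory -- 'datagroup' is the volume group mentioned later in this series, and the logical-volume and mount-point names in the last step are made up):

```shell
# Assemble any software RAID arrays found on the attached disks
mdadm --assemble --scan

# Scan for LVM volume groups sitting on the newly assembled arrays
vgscan

# Activate all logical volumes in the volume group
lvchange -ay datagroup

# Now the filesystems can be mounted (and added to /etc/fstab)
mount /dev/datagroup/home /home    # hypothetical LV and mount point
```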

So what's the downside of VirtualBox so far? Well... it's hard to say. One thing I *have* noticed is that YouTube videos do not play properly. Multimedia playback in virtual machines is always problematic because of timing jitter, and I suspect that being on a Windows host with its really lousy clock system doesn't help. But then, I can just view multimedia on the Windows host -- that's why I have it, for games and stuff that doesn't work well under Linux. Other than that, everything else seems to be working... and you can't beat the price: Free (for personal use).


Monday, July 18, 2011

Re-inventing the Linux desktop

My opinion of the Linux desktop is pretty much well known by now -- I believe I mentioned "Windows 95 as re-implemented by a Soviet Union that never fell"? A.k.a. clunky, incoherent, and obsolete? But recently two distributions have been released with a re-imagining of what the Linux desktop should look like. So today I installed Fedora 15 and Ubuntu 11.04 into VirtualBox on my MacBook Pro (into VirtualBox because it's currently the only virtualization environment with 3D support for Linux), and examined the two new systems -- Gnome 3's Gnome Shell, and Ubuntu's Unity.

Gnome 3 was the first one I looked at, and by far the most revolutionary. Everything in the new Gnome Shell is set up to be doable with a couple of mouse swishes and one or two clicks. A swish to the upper left corner of the screen does three things -- scales your windows Exposé-style so you can choose which one to activate by just clicking on it, sucks in a dock from the left side of the screen, and sucks in a screen list from the right side of the screen. You can grab a window and move it to the next screen in the screen list (there's always one blank screen at the end of the list -- sort of like the iPhone's application screens), or you can click on the word "Applications" towards the upper left of your screen and see a list of applications in a format somewhat like an iPhone's application chooser. It is all easier to use than it sounds -- it really does make the whole thing swish swish swish point and click easy.

If you have applets running, like gDesklets, you can get to them by going to the bottom right corner of the screen. A sort of fuzzy menu bar then rises up from the bottom.

My general conclusion: Gnome 3 is currently incomplete -- it's barely configurable at all for example -- but it puts together the best ideas in UI's that have come down the pike over the past few years into an easy-to-use whole. My workflow falls out of the way Gnome 3 works naturally. If I want a screen for my browser windows, for example, swish to top left, swish to next screen on the right and click it, swish to left and select my browser on the dock, and voila, it pops open on the new screen and *another* blank screen is created for the *next* thing I want to do. Close all the windows on a screen, and it goes away, so I always have just the screens I need for my workflow -- no more, no less. I spent some time today doing software development with this system, and the usability compared to traditional Gnome is astounding.

Next up was Ubuntu's Unity. That, alas, turns out to be a disappointment. While Gnome 3 re-imagined the world to the point where some of the rumored Windows 8 functionality is going to be a clone of already-existing Gnome 3 functionality (like the iPhone-like application chooser), Unity simply attempts to clone Mac OS X without seeming too obvious about it. The problem is that simply moving the dock to the left side and modifying GTK+ to move application menus to the top like MacOS (except active and context-sensitive, can't forget that!) isn't enough to make a quantum leap in functionality. Frankly, it ends up looking like a bit of a mess. Gnome 3 re-imagined, Unity cloned, and like most clones, the clone isn't the equal of the original.

So: Does this mean the Linux desktop is usable now? Am I going to abandon my Macbook Pro and run a native Linux desktop again? Uhm... not hardly. I have a major investment in MacOS professional music software for which there is no Linux equivalent, and I still have significant difficulties viewing multimedia-based sites with Linux. Part of that is Microsoft and Apple's fault -- they release all these tools for "free" that produce (and view) multimedia content in their own proprietary formats like Quicktime and WMA, and don't release the viewing tools for Linux. I also can't read my corporate email using Linux -- neither of Evolution's Exchange plugins will handle proxied Exchange servers. That's sort of important too. Still, the fact that there is now a Linux UI which is actually innovative rather than a crude clone of other people's ideas is a sea change in an OS development process which all too often has ignored the desktop in favor of server optimizations. I don't know what happens next, but I suspect it'll be an improvement. Of course, given where the Linux UI started -- as an incoherent mess (inherited from MIT X11) -- that's sort of faint praise. So it goes.


Thursday, March 24, 2011

Virtualization solution #3: Windows Virtual PC

Note that my Windows platform is Windows 7, which basically includes a free Windows XP virtual machine for Windows Virtual PC. But I wanted to add physical hard drives. And... err... no. It won't do it.

My basic take: Windows Virtual PC works well if you want some Windows "sandboxes" to play in. The drag-and-drop integration in particular is fairly impressive. But for what I want -- to create a virtual machine that's given ownership of a bunch of disks in order to software-RAID-6 them and divvy slices of the resulting arrays out via iSCSI, CIFS, and AFP, i.e., to create a virtual storage appliance -- it simply lacks the basic functionality I need. Thus far my 64-bit Scientific Linux 6 is working quite well as a JBOD manager with VMware Player... and there aren't many virtualization solutions that can do that, and none other than VMware Player on the desktop.


Sound, Flash, and VMware Player

These are some notes on how to get sound and Flash working on a 64-bit Scientific Linux guest (a rebranded Red Hat Enterprise Linux 6) running inside VMware Player on Windows 7 64-bit:

Flash: Note that Chrome for 64-bit Linux does *not* include the integrated Flash player that all other platforms get. You'll need to download the 64-bit Flash beta from Adobe; at the time of this writing, that's at Adobe Labs. Once you extract the tgz file you'll be left with a single plugin file; simply copy it to /usr/lib64/mozilla/plugins, restart Chrome or Firefox, and you're set.
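The whole dance looks something like this (a sketch -- the tarball and .so filenames are examples and may differ between beta builds):

```shell
# Unpack the 64-bit Flash beta downloaded from Adobe Labs
tar xzf flashplayer64bit_linux.tar.gz        # example filename

# Copy the plugin where Mozilla-family browsers look for it (as root)
cp libflashplayer.so /usr/lib64/mozilla/plugins/

# Restart the browser, then check about:plugins for "Shockwave Flash"
```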

Sound: VMware's sound system crashes with the default Ubuntu / Red Hat sound configuration. This is apparently because VMware doesn't bother emulating all pieces of the hardware they say they're emulating, and when ALSA touches the missing bits, VMware disables the sound device. That's easy enough to fix though. From the Gnome menu, go to System->Preferences->Sound. In the Sound preferences, click on Hardware. Change the profile at the bottom from "Analog Stereo Duplex" to "Analog Stereo Output". Then on the little icon of the speaker on the bottom margin of the VMware Player, right-click it and select "Connect". After about a minute it'll turn green and you'll be able to play sound again.

Why this happens: the emulated Ensoniq device is capable of stereo duplex operation (i.e., both recording and playing back at the same time) -- I know because I actually owned one of the physical cards back in the day and used it for exactly that with a multitrack recorder program -- but VMware's emulation of the device is not, so you must disable that capability. Unless you intended to record audio within the Linux virtual machine (*not* recommended; VMware's timing is not good enough to get decent results there), this has no practical effect -- you can still play Flash videos from Chrome (and presumably Firefox) and hear the sound.

So if you're running Windows 7 on your physical hardware because you need the graphics performance for games and aren't willing to do the dual-card IOMMU hack with Xen that I demonstrated previously (or your hardware simply doesn't have IOMMU support, or you're not willing to use the latest bleeding-edge OpenSUSE as your platform), you can still have the far more secure web browsing environment of Linux available, and with VMware Player you can give Linux any additional drives beyond your boot drive to manage. Linux works far better as a server than Windows 7 does -- it can provide CIFS, AFP, and iSCSI, and do it all on a much better software RAID stack than Microsoft's, as well as using LVM to manage that space, for which Windows 7 has no equivalent.

And VMware's Unity system actually works pretty well with SL6/RHEL6 on Windows, although not as well as it works in VMware Fusion on MacOS (the issue being that the little Unity menu icon gets put into the screen menu bar on MacOS with a native look and feel, while it shows up as a clunky little usually-invisible icon above the Start menu on Windows). Which means you can mix Linux Chrome windows, Windows IE windows (ick! But there are a couple of applications I need for customer support that require IE plugins that don't exist for any other browser), Linux shell windows, and hoary old Outlook all on the same screen and manage them using the normal Aero Peek icons at the bottom or, if you have a Logitech mouse with the Logitech drivers installed, by assigning one of the side buttons to Window Switcher (their clone of Apple's Exposé). And unlike earlier versions of Windows, Windows 7 is stable -- in fact, I've never managed to make it crash. It's just very annoying... but that's why Apple is still in business, after all. Because Apple makes computers that are not annoying. For a price. A big price, alas...


Tuesday, March 8, 2011

Virtualization solution #2: VirtualBox

So the next piece of virtualization software that I was going to try out is VirtualBox. Remember, my Linux is installed on an entirely separate hard drive pair that I mapped into VMware Player as two drives, then installed Scientific Linux 6 onto one of the RAID arrays previously configured on that drive pair for that purpose (a 20GB RAID1 pair). 'grub' handles that situation just fine: it skips the RAID header, loads the Linux kernel, and does its thing. At that point, I can see the remaining 1.8 terabytes of RAID'ed LVM volumes.

So I fire up VirtualBox and go to create a virtual machine via its GUI and... err... it doesn't allow me to assign physical drives to my VM. Which is one of those "WTF?!" moments, because the underlying QEMU that VirtualBox is based upon certainly has the *capability* to add physical drives into a virtual machine, but a bit of Googling around finds that you must do some cryptic command line hacking to make VirtualBox do it. The GUI won't do it.

At that point, realizing that VMware was point and click and ridiculously easy to set up and did what I wanted it to do without said hacking, I said "F*** that" and uninstalled VirtualBox. It may be that VirtualBox could perform better than VMware Player. But my time is more valuable than any tiny increment of performance that VirtualBox could potentially give me compared to VMware Player.

Next up: I check out Windows Virtual PC and see what I can do with it. For one thing, it'll be cool to try Windows XP Mode, even if it turns out not to be useful for virtualizing Linux...

-- ELG

Sunday, March 6, 2011

Bizarre Windows behavior

Some bizarre things I've seen with Windows 7 Ultimate thus far:
  1. No loopback device. I had an ISO of Office 2010 on my network share, I copied it over to Windows 7 and... err... now what? I ended up exporting it to the Linux virtual machine as a virtual CD, mounting it in the Linux VM, then re-exporting it via Samba to Windows as a share.
  2. Windows 7 won't browse my workgroup. No way, no how. I can go directly to the Run prompt and type in "\\linserver\office" and get my Office share, but it won't automagically populate its network neighborhood thingy in the left margin of Windows Explorer with my workgroup shares. It's not that Samba is misconfigured. My MacBook properly populates *its* left margin with the Samba server and expands to show the shares when I click on it, as well as showing the share I exported from Windows 7.
  3. Windows 7 won't do a thing when I click "Map Network Drive". Nothing. Nada. I remember that in the Xen installation of Windows 7 it certainly worked fine, it popped up a dialogue and all that, but now that it's native nothing -- zero -- happens when I click on that item.
  4. Probably related to Homegroups. This is the only Windows 7 system on my network, so Homegroups is utterly useless. I attempt to change it and... nada. Nothing. Won't let me leave the homegroup.
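For reference, the ISO workaround from item 1 amounts to this inside the Linux VM (a sketch -- the ISO path, mount point, and share name are made up):

```shell
# Loop-mount the Office ISO that Windows 7 can't mount natively
mkdir -p /mnt/office
mount -o loop,ro /srv/isos/office2010.iso /mnt/office

# Then export the mount point back to Windows via Samba, e.g. in smb.conf:
#   [office]
#       path = /mnt/office
#       read only = yes
# ...followed by 'service smb reload' to pick up the new share.
```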
Computers are supposed to be deterministic. Windows 7 isn't -- its behavior appears to be semi-random, operating upon a logic that Boole and Von Neumann would not have recognized. So have you had weird things happen with Windows 7? Curious penguins want to know!


Saturday, March 5, 2011

Windows virtualization software

Previously I mentioned three bare-metal hypervisor solutions -- Xen, KVM, and VMware ESXi -- and what was required to push through a video card to a Windows VM living on those hypervisors. I have since managed to get that working with VMware ESXi, and I suspect that if I install the very latest Fedora 14 host it'll work with KVM too.

That works fine with a desktop machine or rack server where I can mount multiple video cards into the box. The performance of Windows 7 virtualized is indistinguishable from Windows 7 raw. Here are my scores for Windows 7 raw on that box, with a Seagate Momentus XT 7200 RPM boot drive (due to the front-loading slot on my case that allows easy swapping of OS drives):

  • Processor: 7.5
  • Memory: 7.6
  • Graphics: 7.3
  • Gaming graphics: 7.3
  • Primary hard disk: 5.9
Compare those with the virtualized numbers from my earlier post.

The problem, however, is when you want to go mobile. You simply can't add a second video card to a laptop. So if I want to play games on one of those new Dell Sandy Bridge desktop-replacement laptops, I need to run Windows native, and come up with a way to have Linux also running and handling my years of accumulated data, all of which is in a Unix filesystem tree and cannot simply be copied into Windows. I preferably want this Linux to be running on a raw partition -- not on a file within a Windows partition -- and it has to be able to access raw Linux-formatted USB and SATA drives. So, let's look at the first candidate: VMware Player.

VMware Player is VMware's entry-level desktop virtualization program. At one point in time VMware Player would only start virtual machines that had been created by VMware Workstation, but now it's an almost-full version of VMware Workstation with some functionality, like snapshots, stripped out, and with a limit of four cores. VMware Player is "free" -- free for personal use; if you want to deploy it in a corporate environment you can license it for a fairly trivial per-seat fee (quite trivial -- it'll be lost in the noise of your IT budget).

The test machine is my Xen server with Windows 7 installed as described above. I installed VMware Player on it without any problem, then created my Scientific Linux 6.0 virtual machine with no problem, giving it 2 gigabytes of memory and a 16GB root filesystem. VMware Tools installed easily into SL6 and allowed me to treat the Linux "X" desktop as if it were just another Windows window: I could click into it, my mouse pointer could be moved outside of the window while I was typing into a Linux program, and so forth. Adding the two 2TB SATA physical hard drives to the virtual machine was as simple as point and click, though the VM had to be off to do so because VMware Player's hot-plug functionality apparently does not work with physical drives. Once I booted SL6, it saw the two 2TB drives and assembled the Linux software RAID arrays on them automagically, though I had to do vgchange -ay to get SL6 to recognize the LVM volumes on the RAID arrays.

So how fast is access to those two SATA drives? On a subsequent reboot of my Linux VM, a RAID check got fired off. The two drives were being read at 105MB/sec apiece -- 210MB/sec total -- and it used less than 20% of one of my eight cores for VMware to virtualize this. My take on it is that VMware Player's fake SCSI device takes a fair chunk of CPU to virtualize, but modern multi-core CPU's are so bleepin' fast that you won't even notice (which I didn't, until I went to see).

The final thing I wanted to do was to export an NTFS-formatted volume to Windows via iSCSI. Windows 7 has Microsoft's iSCSI initiator built in. I gave both my Windows machine and my Linux VM fixed addresses (using bridged mode for the Linux VM's virtual network card), and installed the iSCSI target daemon and utilities with 'yum install scsi-target-utils'. Then I added the already-existing logical volume /dev/datagroup/win7 to /etc/tgt/targets.conf (see that file for the exact format of what you need to add) and started the daemon with "service tgtd start". Then I went to the Windows Administrative Tools (you can get to them from the Start Menu if you've configured them to appear there, or from the Control Panel), selected the iSCSI Initiator, told it to scan the IP address of my Linux VM, and voila, it popped up there and as a drive letter in Windows Explorer. Easy peasy! The only thing to remember is to poke a hole for iSCSI in both the Linux and Windows firewalls, or it doesn't work :). (Yes, been there, done that, heh!)
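The targets.conf stanza amounts to something like this (a sketch -- the IQN is a made-up example; pick your own naming):

```shell
# Append a target stanza to the tgtd config, then start the daemon
cat >> /etc/tgt/targets.conf <<'EOF'
<target iqn.2011-03.local.linserver:win7>
    backing-store /dev/datagroup/win7
</target>
EOF

service tgtd start

# Verify the LUN is actually exported before scanning from Windows
tgtadm --lld iscsi --mode target --op show
```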

The final test was to attach a USB hard drive to the system and export it to my MacBook Pro via iSCSI for use as a Time Machine device. When I attached the hard drive to the system, it popped up in Windows; clicking Virtual Machine -> Removable Devices showed the new drive and allowed me to add it to the virtual machine. I then added it to targets.conf and told the tgtd daemon about it, then went to my iSCSI initiator on the MacBook (the globalSAN iSCSI initiator) and added it, then used Disk Utility to format it as a Mac volume. Then I told Time Machine to use it for backup, and... voila. It started backing up at about 20MB/sec -- about 50% of theoretical USB2 speed, not too bad considering this is being done via WiFi, not a direct connection, and the iSCSI target is running in a virtual machine, not directly on the hardware. A copy in Windows of a 4GB file to a similar USB drive ran at 26MB/sec, and that should go faster than Time Machine's writes because it's one big sequential write, not a lot of smaller files. So now I have the equivalent of one of those expensive Time Capsule thingies, except that a 2TB Western Digital drive in an external USB case costs a *lot* less! Why an external USB drive? Simple -- so if I ever have to restore my MacBook Pro after a disk crash (which has happened before), I can unplug it from my big server and plug it directly into the MBP for MacOS to restore the system back to its pre-crash state.

So... it's clear that VMware Player will do everything I want it to do here. There are two more options to look at before calling this competition done, however: Oracle's VirtualBox, which recently released a brand new version (4.0.4), and Microsoft's own Windows Virtual PC, which doesn't officially support Linux but which has been made to do so. More on those later...


Friday, March 4, 2011

Surviving with Windows

Even the most fervent Linux penguin may be required to use Windows for some purpose or another. For example, I have been doing a lot of work with VMware vSphere lately. And vSphere Client runs on... err... *WINDOWS*. Then there's games. Linux gaming is a non-starter because the "X" Window System simply doesn't have the performance to do games. Mac gaming is better now that Steam has arrived, but Windows is still "the" PC gaming platform for most games.

So if you have to use Windows, what makes it less stressful for someone who prefers more elegant operating systems than the 16 years of hacks and kludges that is Windows today? I'll make a brief list here...

  1. Microsoft Security Essentials. Don't even think of running Windows without an antivirus. This program is free, works as well as the others, and does *not* spam you or install spyware on your system like many of the other antivirus programs do.
  2. Google Chrome. Internet Explorer on Windows 7 is a lot more secure than in days of yore, but Google Chrome is the most secure web browser for Windows, period.
  3. Microsoft Office 2010, Standard version. It drives me batty with its incoherent user interface, but Outlook is still the best here. However, if you do *not* need compatibility with Exchange Server and its calendar, install OpenOffice and Thunderbird instead -- their user interfaces are much better than Office 2010's.
  4. Apple Quicktime. For playing Quicktime videos. Duh.
  5. Foxit Reader for Windows. For reading PDF files, which is what everybody's documentation is sent out as nowadays. Do *NOT* install Adobe's own PDF reader; it is evil, with a security breach every other day.
  6. On the other hand, there's no alternative to Adobe's own Flash Player. Thankfully Chrome doesn't need it, but IE will, so go ahead and install it -- from within IE, because the installer will complain that Chrome already has Flash (duh).
  7. GNU Emacs for Windows. Because sometimes you need to edit text files, and Emacs is the best way to do that.
  8. Xmarks. The best way to keep all your bookmarks in sync between Chrome, IE, and any other computers you have.
  9. Evernote, to write things like shopping lists and todo lists and keep your notes in sync with your smartphone, your tablet, and the cloud.
  10. Steam client. Of course :).
To be continued...

Note one of the things I did *not* mention: Cygwin. The reality is that it will constantly annoy you because the Unix API doesn't map well onto the Windows API. You're using Windows, not Unix. You're much better off learning how to use native Windows tools like PowerShell. They will annoy you too, but not as much as trying to shove a square peg into a round hole.


Wednesday, March 2, 2011

Disk encryption and LUKS

One of the interesting features of recent Linux distributions such as Ubuntu is LUKS, which allows you to encrypt (most of) your data on the hard drive. So the question I was asked is: How secure is LUKS?

Let's look at the Ubuntu implementation. It uses AES-256 to encrypt the disk volume, with an appropriate cipher feedback mode to deal with frequency attacks and other attacks against statically encrypted data. First, is the cipher secure? The answer there is yes. AES (Rijndael) has been subjected to extensive cryptanalysis, both as part of the AES cipher competition and afterwards, and there are no known compromises of full AES. The reality is that even AES-128 would be sufficient for this purpose, because the weakness of the Ubuntu implementation lies elsewhere: the passphrase. An NSA employee once stated in public that they no longer cared how well encryption algorithms worked, because there was an infinity of methods for obtaining the passphrases of anybody they really wanted to spy on.

Remember, a cipher is *not* a cryptosystem. A cryptosystem consists of both the cipher and the software required to feed it keys and data. In this case, the biggest weakness is the keystore (stored in the header of the volume) and the method of securing it. The keystore is secured by a passphrase. Brute-force dictionary attacks against the passphrase to decrypt the keystore might work, but usually won't if you chose a sufficiently complex passphrase incorporating non-word "words". The most likely attack here is either a passphrase sniffer injected into the system, or a phishing exploit where a prompt for the passphrase is put up not by the login process but by software previously injected into the system via other means (typically a network-based exploit that then uses a privilege escalation exploit to gain permission to insert itself into the system boot sequence at the correct place).
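You can see this keystore-in-the-header design directly with cryptsetup. A minimal sketch -- the device name is hypothetical, and note that luksFormat destroys any existing data on the device:

```shell
# Encrypt a partition with LUKS. The passphrase you type unlocks a key
# slot, which in turn decrypts the volume master key stored in the
# on-disk LUKS header -- the "keystore" discussed above.
cryptsetup luksFormat /dev/sdb1

# Inspect the header: cipher, mode, and the key slots.
cryptsetup luksDump /dev/sdb1

# Open the volume (prompts for the passphrase), then use the mapped
# device like any other disk:
cryptsetup luksOpen /dev/sdb1 securedata
mkfs.ext4 /dev/mapper/securedata
```

The luksDump output makes the attack surface concrete: everything an offline attacker has to work with is that header, which is why the passphrase protecting its key slots is the weak link.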

Or, if done by a determined organization, black bag cryptanalysis could be used: a hardware keystroke scanner placed in your system; a pin mike pointed at your keyboard, whose recordings can be used to determine which keys were pressed based on sound (each key has a subtly different sound when pressed, and given enough recorded keystrokes, frequency analysis for your language can assign keys to the appropriate sounds); or a pin camera mounted somewhere aimed at your keyboard to record your key presses. If you cannot guarantee that your system and computing environment have been physically secured, no cryptosystem is going to be sufficient. Thus the various exploits against ATMs, where the machines are physically compromised (via scanner equipment physically added to them) to capture ATM card codes and PINs for later use by thieves.

The Ubuntu setup, in other words, is sufficient for dealing with casual theft of computer equipment by random thieves -- they will not have previously captured passphrases and thus will be reduced to brute-force dictionary attacks upon the keystore encryption, which should fail as long as you choose a sufficiently complex passphrase -- but is insufficient against a determined attacker who is willing to go to the trouble of rootkit'ing or black-bagging you. In particular, it is useless against network-based exploits, which occur after the passphrase has already been entered and can extract your data via the network accordingly. So it is useful, but adjust your expectations -- it is not going to stop a determined attacker (as opposed to a casual thief of your drives or computer).

So, could this setup be made more secure against even passphrase interception attacks? Yes and no. You would need to load the Linux kernel and all pre-boot software from secure read-only media and also have the keystore reside there, then physically secure this media in some location other than with your physical hardware. A dongle in your laptop bag, for example, is not sufficient. You would need to have this media physically secure in your presence at all times to avoid having it compromised by black bag attacks, and have some method of physically destroying it if impending rubber hose cryptography seems likely and the repercussions of the data being decrypted are dire. But even there, you're still susceptible to ordinary network-based attacks that trick you into installing malware onto your drive, or that exploit holes in your network software and then steal your data via the network.

The reality is that the only computer that is completely and utterly secure is one located in a vault with no network access. Unfortunately, said computer is also not very useful for our day-to-day computing needs. The LUKS setup sufficiently deals with the problem of casual theft of computer equipment but will not stop a determined attacker with the resources of a major criminal syndicate or government behind it. It is a reasonable compromise between security and usability. So adjust your expectations accordingly.


Friday, February 18, 2011

User interfaces and the Office 2010 problem

Microsoft has some excellent user interface guidelines. The committee that wrote these guidelines made many of them unnecessarily complex and verbose, and I disagree with quite a few of them because I feel programmers will abuse them to do things that shouldn't be done in user interfaces, but by and large, I believe that if you keep these guidelines in mind, you will have a better product.

How, then, do we explain the mess that is Microsoft Office 2010? I can't. Take the mess that is Outlook, for example. Okay, so you want to add an email account. What's the first thing that Outlook tells you to do? Err.... EXIT OUTLOOK! And that makes sense because... err... why?

Yes, I know someone's going to pipe in with technical reasons for why Microsoft does this. But my point is, the user doesn't care. If Microsoft wants to keep a separate database with email accounts so that any email client they support (Outlook, Windows Live, or any future client) can access it, fine and dandy. But don't require people to run another program just to set up their email accounts when they're already in their email program.

Then there is the horrifically slow speed of Outlook 2010. I'm running Office 2010 on a 3.07GHz quad-core Intel Core i7 Extreme 950, which is not a slow processor -- indeed, it's one of the fastest consumer processors on the market. My Windows Experience indexes are near the top for everything except disk I/O, where it's constrained by the mediocre limits of a 7200rpm SATA drive. I also have 12 gigabytes of RAM. But Outlook 2010 is always lagging. I also have my MacBook, a dual-core Core i7 at 2.6GHz with a 5400rpm SATA drive. The MacBook is running Apple's Mail with the exact same accounts set up as the Windows machine. I hit the "sync" button on Outlook. I hit the "sync" button on Mail. And the winner is... the slower computer. By a landslide. Yes, Apple Mail on a MacBook blows Outlook 2010 out of the water in terms of responsiveness, displaying the count of new messages one by one in its left-side inbox tray long before Outlook 2010 finishes its clumsy sync cycle and starts deciding to display the counts.

Here's a clue: responsiveness does *not* necessarily require faster algorithms. Mail isn't doing anything extra-special compared to Outlook. Mail simply does more in parallel and returns results to the user as they come in, rather than batching them up. Responsiveness matters. Caching values so that you don't have to fetch them every time from the server, displaying results as they come in, and otherwise making it look fast is, in many cases, as good as being fast. I'm not privy to the innards of Outlook, but it appears to me that Outlook performs its sync cycle, tediously gathering new emails from each server, and then -- and only then -- decides to update counts and headers in the email window. I may be wrong about that, but I shouldn't even be guessing in the first place, because, like with Apple Mail, it should just happen.

And finally, let's look at Word 2010. Please. Word 2010 goes full-ribbon. Microsoft's new "ribbon interface" concept is a tabbed icon tray. It suffers from the same problem as Apple's Dock -- icons simply are not descriptive. They're so much hieroglyphics to me. Thing is, Apple's Dock is just one row of icons, and even that is confusing enough that I sometimes hit Preferences when I was aiming for System Performance. Word 2010, on the other hand, is tabbed row after tabbed row of icons. The end result is that it's utterly unusable to me because I can't find anything, any functionality I want is hidden in row after row of hieroglyphics. I instead go into OpenOffice and create my documents there. Way to drive people to your competition, Microsoft!

There are two points here:

  1. Icons do *not* replace text. Don't even think about it. Icons are a *supplement* to text.
  2. Microsoft doesn't follow their own user interface guidelines, adding gratuitous complexity for the sake of complexity, hiding functionality in row after row of "ribbon" icons to the point of rendering their product basically unusable to anybody who doesn't live in it. Look at page 19 of their user interface document (linked to above). Look at Office 2010. Sigh and think of what could have been, if Microsoft had only followed their own guidelines.
Note that Windows 7 itself, while not an outstanding user interface, is understandable and usable. I might laugh at how you must use the search box to find anything in the Control Panel because there's so many of the freakin' things, but there is a logic to it that is easy to understand. Office 2010, on the other hand, has a logic to it... but one that reminds me of the story I presented here a few months back, about the physicist who made a logical interface for our product -- but an interface that only made sense if you were a physicist. Or as he exclaimed when we told him that our users wouldn't understand his UI, "then your users are idiots!" Why... yes! And they're idiots with *money*, who want to get a job done, and who are willing to *pay* us if we give them something that they can use to get the job done that doesn't require them to be nuclear physicists to use it!

Sadly, Microsoft's been stuck in their own hermetic world for so long with no strong leadership from the top to force all the various divisions to comply with things like, say, user interface standards, that there is just no consistency to their product line. Even Windows 7, probably the best Microsoft user interface since Windows 95 introduced their "new" user interface to the world, suffers from this syndrome a bit -- the various vintages of programs in their control panel, for example, have user interfaces that are all over the place. It's a shame, really, because Microsoft has the technology to do it right, and even the people like those who participated in writing their user interface design document. Just not the leadership.

I guess Microsoft chairman Uncle Fester figures that as long as the majority of people need to use his workstation OS because it's the standard, he really just needs to sit back and rake in the money with occasional gratuitously incompatible upgrades to force people to buy replacements for their old stuff. And he may be right. But unless your company is Microsoft, it behooves you to do as Microsoft says -- not as they do. Because your competition is going to come out with a clean, simple, easy to understand user interface for their product... and if your product looks like a mess, if it exposes technical details that customers don't want to know about rather than Just Working, well.


Wednesday, January 26, 2011

Denial is more than a river in Egypt

A Linux fanboy asks, "Why do so few people use Linux on the desktop? After all, it's superior to Windows."

My response is: Are you joking? Surely you're joking, right?

First of all, people don't use operating systems. They use applications. And most end-user applications run on Windows. If, for example, I want to manage an ESXi server, I have to use vSphere Client to do that. And vSphere Client runs on... err... Windows. As does pretty much every other specialty application on the planet that people use, and even many non-specialty ones -- try viewing WMV or QuickTime videos on Linux. You can't do it. They simply don't work. You can (illegally) hack your Linux system to do this by copying components from Windows, but really, how many end users are going to do that? And they certainly aren't going to do it in a corporate environment, where systems are locked down to prevent end users from installing illegal software.

Secondly, as I've repeatedly pointed out, the Linux desktop environment is a mess. It's as if the Soviet Union had not fallen in the early 1990s, had seen Windows 95, and decided to create a Soviet version of Windows 95. It's clunky, creaky, overly complex, makes little sense from an end-user perspective, and things that are easy on the Mac or Windows are ridiculously difficult to do with the Linux desktop. For example, assigning one of the side mouse buttons to the window switcher takes a simple tool on the Mac (the control panel) or on Windows (the Logitech mouse manager). To do so on Linux, on the other hand, is an adventure.

So anyhow, what inspired this rant? Well, simple: I was annoyed at the slow speed of the QEMU console used by KVM and Xen. This appears to be an issue with QEMU's console driver; it does the same thing with both KVM on Fedora and Xen on OpenSUSE. QEMU's console driver screen-scrapes video memory, then stuffs it into a VNC session, but does this so ridiculously slowly as to render it basically unusable. I know it's possible to write virtualized console drivers that operate quickly -- VMware does it *over a network*, for cryin' out loud; install ESXi on one of your spare systems and point vSphere Client at it from a Windows system on your network if you disbelieve me -- but apparently nobody in the Xen or KVM communities cares, since this has been a problem for literally years. I guess it's because Xen and KVM are typically used to virtualize things like web and email servers, where nobody cares how fast the console is.

The workaround is to a) use ssh if you need CLI access to the system, and b) spawn off a vnc session if you need GUI access to the system. For example, for my Fedora guest, I have this line in rc.local:

su -c "vncserver -geometry 1440x900 -depth 16" egreen > ~egreen/vnc-log 2>&1 &

From there on, I access it via a VNC viewer from my Macbook Pro or from the Linux host system.
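Client side, the workaround amounts to something like the following; the guest hostname, user, and display number are illustrative assumptions, not my actual setup:

```shell
# CLI access: plain ssh to the guest's bridged network address.
ssh egreen@fedora-guest

# GUI access: connect a VNC viewer to the display spawned from rc.local.
# Display :1 listens on TCP port 5901 (5900 + display number), so that
# port must be open in the guest's firewall.
vncviewer fedora-guest:1
```

Either way, you bypass the QEMU console entirely, which is the whole point.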

[flame on]
This problem has been there since the beginning of the QEMU project, but nobody cares to fix it because, well, it's "good enough" to get in and start vncserver, so what's the problem? This disregard for end user experience is the #1 reason why Linux is a sad also-ran in the desktop operating system competition. Even netbooks have switched away from Linux to Windows, because the end-user experience with Linux is so pathetic that even Windows -- sad pathetic WINDOWS -- does it better. It's the whole XKCD 619 problem. It's not because Linux doesn't have the technical capability to have a decent end-user experience, it's because nobody *cares*: what exists is good enough for geeks, and if you point out that end users don't like it, you get oodles of pushback from the Linux fanboys about how you don't *really* need feature X because there is workaround Y. Denial is more than a river in Egypt, folks. I've been using Linux for 15 years. My name is in the Linux source code. I've written at least half a million lines of userland code for Linux in the past 15 years, and while my kernel contributions are minor driver contributions, they're there. And my desktop is a Mac, because I find the Linux desktop environments to be so clumsy, clunky, and annoying. Q.E.D.
[flame off]

So anyhow, that's my gripe of the day. I'm sure I'd get some pushback from Linux fanboys on it if they bothered reading it, which of course they won't, because they're too busy making sure Linux runs well on 4096-core processors. All I'll point out is that refusing to admit you have a problem is a guarantee that the problem will continue. The Linux community is like a drunk that refuses to admit he has a drinking problem. Linux has a user interface problem, people -- and like the drunk who won't go to rehab because he refuses to admit he has a problem, Linux's user interface problem is not going to get fixed as long as Linux geeks continue to insist they have no problem.

-- ELG