Saturday, November 7, 2009

ACL management on MacOS Snow Leopard

So I was going to follow the directions at this hint site to prevent Time Machine from doing a full backup again once I updated my MacBook Pro to a bigger drive. After all, I don't want to re-backup the stuff I just restored from my backup! But my attempt slammed to a halt after I typed 'fsaclctl' and... uhm... WTF? It isn't in Snow Leopard! And by the time you get to userland, the permission to override a "Deny All to All" ACL has already been dropped, even if you su to root... you just can't get there from here unless you can somehow turn off ACL support for the whole filesystem!

Ah, but never fear, the Leopard version of fsaclctl works just fine on Snow Leopard. The question is, which of my half dozen backup drives up in the storage closet or offsite is old enough to have Leopard on it? I was about to get up and go grab one, when I glanced down and... there was the Mac OS Leopard 10.5.2 install DVD, right there, in the pile of disks I'd used to re-image the Mac.

So the first thing to do was drill down and find the package. The packages live in '/Volumes/Mac OS X Install DVD/System/Installation/Packages', and the easiest way to get there is 'Go to Folder' from the Finder 'Go' menu. Then, by dragging and dropping packages onto the /Developer/Applications/Utilities/PackageMaker utility, I discovered that fsaclctl lives in the package "BSD.pkg", in directory /usr/sbin.

The next question is, how do we get the file out of the package? I couldn't drag it out of PackageMaker; it simply refused to do so. So I grabbed a utility called 'Pacifist'. I won't claim it's the best utility for the job -- it's simply the first one that came up when I googled -- but it let me drop BSD.pkg onto it, drill down to the file, then drag the file out to a folder on my desktop, from which I could put it into ~/bin and use it.
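For the command-line inclined: you can probably skip Pacifist entirely. Depending on whether BSD.pkg on your install DVD is an old bundle-style package or one of the newer flat packages, one of the following should pull the file out. This is a sketch from memory rather than something I actually did, so double-check the paths and patterns:

  • cd /tmp
  • gunzip -c "/Volumes/Mac OS X Install DVD/System/Installation/Packages/BSD.pkg/Contents/Archive.pax.gz" | pax -r '*fsaclctl*'
  • (or, for a flat package:) xar -xf "/Volumes/Mac OS X Install DVD/System/Installation/Packages/BSD.pkg" && gunzip -c Payload | cpio -id '*fsaclctl*'
  • cp ./usr/sbin/fsaclctl ~/bin/
Once it's in ~/bin, the hint boils down to something like 'sudo ~/bin/fsaclctl -p /Volumes/YourBackupDrive -d' to turn ACL enforcement off for the backup volume (again from memory -- check the usage message before trusting my flags).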

Now, this isn't about the Time Machine hack (BTW, it didn't work -- apparently Time Machine's implementation has changed since Leopard), but, rather, about security. Some folks wonder why MacOS is more secure than Windows. This experience gives you one clue why: there are things you cannot override, even with full administrative access, once permissions are dropped during the boot process. I suspect that future releases of Snow Leopard will remove the low-level ioctl that fsaclctl relies on, further securing the system. But it's clear that while Apple doesn't make splashy announcements about security and doesn't have some of the bells and whistles like address space randomization, they're doing some things quite right in the background to keep things secure.

-EG

Friday, November 6, 2009

Parallels 5 vs. VMware Fusion 3

So I have tried both of these virtualization solutions for MacOS Snow Leopard and the winner is... VMware, by a landslide. Not because of performance -- VMware's performance is acceptable for my purposes, though I can definitely tell I'm running in a virtualized environment -- but because VMware WORKS, and Parallels doesn't. That's the bottom line. I could go into more detail, but I'm just too frustrated with Parallels right now and would use language not appropriate for polite conversation. Having Parallels crash my computer *TWICE*, and lock up three different times, simply does not make me happy.

I am saddened to say this, because I've owned Parallels since version 2.0, but this is it. This is the end. They are not getting any more money from me. With each new release of Parallels, they promise they've got it right this time. Each time, they break things badly -- for example, in Parallels 4, one of my mapping programs went BLAMMO unless I turned off mouse pointer acceleration in the Windows control panel, and then the Parallels device driver simply refused to display any mouse pointer at all. Meanwhile, VMware Fusion 3 is a rock solid product. It might be slightly slower than Parallels on some benchmarks (hard to tell, I could never keep Parallels running long enough to run the benchmarks I wanted to run), but it *works*, and the integration between Windows and MacOS Snow Leopard is quite good -- no problems with cut-and-paste or sharing files between Windows and MacOS or anything like that. The competition between VMware and Parallels is over, and Parallels is done. Finished. Kaput. They had first mover advantage and, like Netscape with web browsers, simply failed to execute.

Which reminds me of the time my manager was the guy who had run Netscape's development process into the dirt. Needless to say, the common Linux fanboy notion that Microsoft ran Netscape out of business is utter nonsense -- Netscape's browser technology disintegrated without any help from Microsoft at all, under the weight of too many idiotic false deadlines and hacks, and the manager who did that then did the same thing to my then-employer's development process. But that's another ugly tale that tends to evoke unwise language, so instead I'll write something a bit more abstract about deadlines and why they're both useful and, in some cases, toxic.

-EG

Numbers from Windows Experience quickie benchmark:

  • VMware 3:
    • Processor: 5.9
    • Memory: 3.9
    • Graphics: 2.9
    • Gaming graphics: 3.4
    • Primary hard disk: 6.3
  • Parallels 5:
    • Processor: 4.5
    • Memory: 3.9
    • Graphics: 2.9
    • Gaming graphics: 4.1
    • Primary hard disk: 5.9
Parallels has somewhat better 3D performance, somewhat poorer performance on processor and hard drive tests, same as VMware elsewhere. Parallels is probably better if you want to play games, but that's why Boot Camp was invented...

Monday, November 2, 2009

The Windows 7 'reg' command

So I had a problem. I had a Topo 8 install on my old XP hard drive and wanted to transfer it to my new Windows 7 machine. No problem, just re-install, right? Well, that would be a problem alright, because my activation key for Delorme Netlink would not work in the new install -- Delorme links it to a single installation of Topo USA. Note that the licensing for Netlink allows me to run it on any computer that I own as long as it's just one computer at a time (i.e. I can't have it installed on more than two computers and can only use it on one computer at a time), but the actual implementation is similar to the lamentable Windows Activation in that it often disallows things that are allowed under your licensing agreement, requiring you to call in to a support center and have a database entry adjusted at the other end to allow activation.

So now let's talk about what actually happens on modern versions of Windows when you install a program. Things get placed into basically four areas:

  1. Start Menu folder -- usually a folder is created here with a new Shortcut to the application plus utilities. The location of the Start Menu folder differs wildly between Windows XP and Windows 7, but it's easy to find.
  2. Program Files -- usually a folder with the program and all its data is created here.
  3. Windows -- Any driver bundles are plopped into the appropriate folders here, as is installer/uninstaller info.
  4. Registry -- Configuration data and component registration.
Of these, the first three are easy to copy from one computer to another. But the registry entries... ah yes, now that is a problem!

The fundamental problem is that the registry is a database, so you can't simply drag and drop entries from point A to point B -- unlike MacOS, where you could just copy the appropriate directory from the old /System/Library/xxx or ~/Library/xxx to the new one to move the configuration data, or Linux, where you could just copy the appropriate directory from the old /etc/xxx to the new /etc/xxx. You have to use database tools, and the Windows tools for accessing the registry are crude and primitive compared to the tools available for accessing file data. This is especially true of the 'regedit' GUI, which is utterly incapable of copying registry keys from place A in the registry tree to place B. But never fear: this is a capability that the command line 'reg' command has, and we're going to use it.

The first thing to do is to mount the old hard drive as your "D:" drive. Make sure you've added the 'Run' menu option to your Start Menu with the appropriate control panel entry (sorry, you've already seen my opinion of the Windows control panel, it's in there *somewhere* but you'll have to do like me and just dig until you find it!). Select 'Run' from your start menu, and go into 'Regedit'.

The next thing you'll need to do after that is import the HKLM hive into your registry. Click on the HKEY_LOCAL_MACHINE entry and select File->Load Hive. Browse over to D:\Windows\System32\config and select 'software' as the hive to import. Then give it a name, like OLD_SOFTWARE. Once you finish doing this, you'll find that OLD_SOFTWARE is now in your registry tree. You can now exit regedit, because regedit has no (zero) ability to copy subtrees from one place to another in your registry tree.
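Side note: I did the hive load through the regedit GUI, but I believe the same thing can be done straight from an administrative command prompt with the 'reg' command itself -- treat this as an untested sketch:

  • reg load HKLM\OLD_SOFTWARE D:\Windows\System32\config\software
A matching 'reg unload HKLM\OLD_SOFTWARE' detaches the hive again when you're done with it.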

Next, you'll need an administrative mode command prompt in order to operate. I'm going to assume you have some basic Unix-compatible command line tools available, either via Cygwin or by copying files to MacOS or Linux over a network file share and running the Unix commands there, simply because there are no native Windows tools that will do the same command line parsing as easily. It COULD be done with VBScript; it would just be a lot more coding to make it work.
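(One aside before we start: the simple prefix filter in the first step below does have a native equivalent, since findstr's /B flag anchors the match to the beginning of a line -- something like 'findstr /B "HKEY" Delorme_keys > Delorme_keys2' should work without ever leaving the Windows box. It's the awk step further down that really wants Unix tools.)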

So: now that we have a command line, let's query out all the Delorme keys:

  • reg query HKLM\OLD_SOFTWARE /s /f delorme >\Delorme_keys
Then copy Delorme_keys someplace where you can run Unix commands on it:

  • grep "^HKEY" Delorme_keys >Delorme_keys2
  • vi Delorme_keys2
Take a look at those keys, and at the original file too, to see which ones you want. In general you will not want to completely replace the contents of every key that has some data item related to your application; you'll want the Classes entries and any software-specific keys. So I edited Delorme_keys2 down to the keys I wanted to copy from the old install, then:

  • awk ' { t=$0; sub("OLD_","",$0) ; printf("reg copy \"%s\" \"%s\" /s\n", t,$0); } ' Delorme_keys2 >DelormeRegCopy.bat
This gives me a file that has lines in it that look like this:
  • reg copy "HKEY_LOCAL_MACHINE\OLD_SOFTWARE\Classes\CLSID\{20016EDD-4CB6-11D3-A3FA-0000C0506658}" "HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{20016EDD-4CB6-11D3-A3FA-0000C0506658}" /s
The 'reg copy' command will copy both the key (and its data items) and any subkeys to the new location, assuming you provide the /s flag. Then I copy this .bat file back to the Windows 7 system, and from the Command prompt type "delormeregcopy.bat" and voila! Now my copy of the Delorme application works, Netlink works, and I can then re-install Topo 8 on top of this install to "repair" it (i.e., put the installer and driver bundles in the right place and verify that everything is registered properly) and the installation keys will still be there to keep my Netlink operational.
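An alternative approach, if you'd rather not generate a batch file of 'reg copy' commands at all: 'reg export' will dump a key tree to a .reg text file, which you can edit to change OLD_SOFTWARE to SOFTWARE and then pull back in with 'reg import'. A sketch, using a made-up key name and with the same look-before-you-import caveat as everything else here:

  • reg export HKLM\OLD_SOFTWARE\Delorme delorme_old.reg
  • (open delorme_old.reg in Notepad, replace every "\OLD_SOFTWARE\" with "\SOFTWARE\", and save -- the exported file is Unicode, so plain sed may mangle it)
  • reg import delorme_old.reg
I stuck with 'reg copy' because it keeps everything inside the registry, but export/import has the advantage of leaving you a .reg file recording exactly what you changed.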

Note: As always, back up your registry before mucking around in it. And be *very* careful with any keys you copy in, I examined the contents of each key using regedit before I allowed it to stay in my final Delorme_keys2 file. The above is NOT the full directions for how to do this specific task, simply because I do not wish to enable software piracy, but, rather, an example of how to use the 'reg' command to copy critical registry entries from an old installation into a new installation. And the usual disclaimer "this might destroy your system!" applies.

Now: I could go off on a rant about how stupid the Windows registry is, how the tools to manipulate it are primitive and far inferior to the tools available to manipulate text files, blah blah blah, but we already all know about that. The "reg" command at least gives us some of the missing functionality that regedit doesn't have, even if it requires typing obscure commands at a command prompt. But then, "obscure" and "Windows Registry" do go together like "macaroni" and "cheese", eh?

-- EG

Sunday, November 1, 2009

The Windows 7 hoopla

So is Windows 7 a Mac killer? Or is Windows 7 lipstick on a pig? The answer is "no" to the first, and "mostly, but not entirely" to the second.

Let's look at the first question first. Windows 95 in many ways introduced "the" Windows user experience. It was a clean, reasonably logical user interface that was surprisingly good considering the constraints of the underlying platform -- constraints imposed by the hardware of the day and by the need for DOS compatibility until Windows-specific software arrived. Windows 95 was the release I evaluated before going to my boss and saying, "This is going to be big. We need to figure out some way to make money with it." That was a few months before a customer brought Linux to our attention (more on my reaction to that later -- it was not favorable, initially), but I certainly wasn't wrong in what I told my boss.

It's been all downhill since then from a user interface standpoint, with each new release of Windows adding yet more useless folderol to waste resources and confuse customers, but no fundamental change in the UI. Windows 7 continues that tradition, adding lipstick to the pig that Microsoft's overly complex user interface has become by renaming some things, changing text to icons on the menu bar, and somehow managing to make the Control Panel even more complex than it already was. People who claim Windows 7 could somehow be a "Mac killer" are being ridiculous. Changing the text on the menu bar to icons does not make it a dock, and Windows 7 is even more confusing to set up and configure than its predecessors if you're trying to integrate it into an existing network. I clicked around the control panel for quite some time before finally typing "change workgroup" into the search bar. That took me to a place where I could change the workgroup (to match my home and office workgroup name, so my systems would appear in the network browser), but where is that located in the morass that is the Windows 7 control panel? I have absolutely no idea. I clicked into the logical place and it changed my workgroup to "WORKGROUP", which isn't what I wanted at all.

Meantime, on the Mac, click the Apple menu and select 'System Preferences'. There are two plausible places to set the workgroup -- 'Sharing' or 'Network'. I clicked on 'Sharing' and didn't find it, so I clicked on 'Network', then the 'Advanced' button, saw the word 'WINS', and yep, there's my NetBIOS name and workgroup name. Three clicks once I got the Mac "control panel" up -- Network, Advanced, WINS -- to get me where I needed to be.

So from a user interface perspective, Windows 7 definitely is lipstick on a pig. It's just a bunch of lipstick on top of the original Windows 95 user interface, and like a toddler messing with mommy's lipsticks, the results are not all that great from a usability perspective. Frankly, I prefer the original, which was fast, clean, and useful. But the user interface isn't where the important changes in Windows 7 are. The important changes are under the hood. Windows 7, in my test, used approximately 3GB more disk space than Windows XP -- i.e., around 8GB rather than 5GB. Its memory requirement for snappy performance is approximately 256MB more than Windows XP (around 768MB vs. 512MB) if you disable Aero by switching to a 'Basic' theme, and since Aero is just lipstick, that's no big deal. In exchange you get a more secure operating system with built-in functionality that Windows XP lacks, such as the ability to record a DVD. I have not tested Windows 7 on a netbook yet, but I'm not seeing any reason why it wouldn't work -- even with Microsoft Office installed and various third-party Internet software (Firefox, Safari, Flash, etc.) I'm using only 14GB of disk space on my Windows 7 system, and even low-end netbooks come with 32GB SSDs and 1GB of memory today.

So from that perspective, Windows 7 accomplishes what Microsoft wanted it to do -- it lets them discontinue support for Windows XP, because it will run pretty much everywhere that XP is currently required due to the resource usage of Vista. It also accomplishes what most IT people want -- a more secure operating system that won't require them to spend half their time cleaning up after virus outbreaks, and one that allows them to standardize on *one* operating system rather than a mishmash of various versions of Windows. On the other hand, it's pretty clear that Microsoft needs more than lipstick on a pig to clean up their user interface. They need a few iFools to lead the charge against useless UI complexity, including at least one iFool with the status in the corporation to push back against the marketing droids and geeks who always want one...more...feature... that will never be used by actual customers but looks good on a marketing flyer, or looks, like, really rad, dude. I wish them luck, because after fifteen years of putting lipstick on a pig, there's almost more lipstick than pig insofar as the Windows UI is concerned.

-- EG

Wednesday, October 28, 2009

The smartphone maze

Much has been made of recent improvements in Google Android phone sales. Android phones are now available (or will be available by November 1) on all major U.S. carriers except AT&T, and many carriers will have multiple Android phones. There are some who say that this will doom the Palm Pre, which along with the iPhone has the slickest user interface of all the various smartphones out there. But my own analysis is that this isn't so: The smartphone OS that Android is supplanting is not RIM's or Palm's, but, rather, Windows Mobile.

It is little secret that the development of the next generation of Windows Mobile is a disaster. Windows Mobile 6.5 has been announced for the end of this year to collective yawns -- nobody thinks anybody is going to actually ship a phone based on it. Windows Mobile 7 has been announced for next year to barely concealed guffaws. Nobody who is serious expects a viable Windows Mobile 7 to come out anytime before the end of next year. What has happened, during this era of stagnation in Windows Mobile, is that WM vendors are now migrating to Android for their new smartphones. Android supports the new features of the new smartphone hardware, while Windows Mobile doesn't. And while Android is a user interface disaster, so is Windows Mobile -- both systems embody pre-iPhone paradigms of how to do things where each application has its own unique user interface, as vs. the new multi-touch common-user-interface paradigm where all user interface coding must go through a library that enforces a common look and feel. In short, where geeks used to go to WM because it was a (relatively) open platform with a lot of capabilities such as multi-tasking that the competition did not have, now they're going to Android instead because it has those same attributes but supports newer hardware.

So what seems to be falling out of all this is that Windows Mobile is going to go the way of old-school PalmOS shortly. The current vendors of WM phones such as HTC appear to be engaged in a mass migration to Android. But this does not mean that sales of the iPhone and Palm Pre will be hurt by Android. They are simply different markets -- Android, due to its fundamental design and development processes, will simply never be able to match Apple or Palm on ease of use or consistency of user interface between applications. Like Windows Mobile, Android is a geek product. Plenty of geeks will likely end up migrating to Android, but there is a huge market for smartphones as people max out the capabilities of standard candybar/flip phones between Twittering and everything else they want to do with phones, and most of these people are not geeks. Vendors like Apple and Palm are well positioned to go after that market... but Android simply doesn't play there, any more than RIM does with their crackberries.

-E

Monday, October 26, 2009

People are NOT fungible

One of the things that happened during the transformation from being "employees" to being "human resources" is that large corporations apparently decided that employees are fungible. That is, if you have two employees, employee A and employee B, and employee A is making a lot more money than employee B, it's fine to just drive off employee A and replace his position with employee B then hire a contractor for even less to fill employee B's position. Hey, an employee is an employee, right? Interchangeable, just like cogs, eh?

Much has been said about Microsoft's T-Mobile Sidekick disaster and what it says about the notion of "cloud computing" (hint: as I said earlier, cloud computing does not eliminate normal IT tasks other than actual hardware maintenance). But it says even more about the whole concept of "human resources". The infrastructure that Microsoft purchased with Danger included Oracle databases, Sun servers, and a set of non-Microsoft NAS or SAN systems. None of these are things that Microsoft has experience with. The current hypothesis is that Microsoft hired a contractor to do an Oracle database upgrade, the contractor did exactly that, and Oracle -- as it often does -- ate its database during the upgrade. This was compounded by, apparently, the database backups being unreadable by the new version of the database. All of this is remediable if you have sufficient Oracle expertise on staff, but apparently neither Microsoft nor their contractor had such expertise -- the people who had it all left Danger after the acquisition, having been shifted to positions dealing with other technologies that they didn't like or didn't have the skills to do successfully.

Lesson for managers: identify the critical skills you need in order to continue to have a viable business, and retain those people. It's a lot easier to retain the people you need than to find new people with the same skills once they leave and you discover that you suddenly no longer have a viable enterprise, because critical tasks are no longer being done for lack of expertise to do them. Employees are not fungible. You simply cannot replace an Oracle database expert with a contractor hired off the street or with an expert in Microsoft databases. Oracle databases are black magic, and the people who can successfully maintain them are worth every penny you pay them.

Of course, it's easy to throw stones at Microsoft here, but this isn't a Microsoft problem. It's an industry-wide problem. Managers industry-wide are failing at the task of identifying the skills they need in order to keep the business viable, and are blind-sided when the people with those critical skills leave. Thus you get disasters like Sprint's Nextel disaster, or this Sidekick disaster, where critical infrastructure people left and the infrastructure fell apart and rendered the enterprise non-viable. Employees are not fungible, and if you fail to identify the skills needed to keep your business operating and to retain the people who have them, you may not get the press of the Sidekick disaster, but your business will operate slower and less efficiently, and will have difficulty getting product out the door. Pay special attention to IT and operations people. That's not sexy stuff, but both Sprint/Nextel and Microsoft/Danger show that you simply cannot fire all the operations people you just acquired and replace them with your own employees, who are experienced with a totally incompatible technology. It doesn't work. It just doesn't. And remedying the disaster that arises after you do this will be far, far more expensive than just retaining those critical infrastructure people in the first place.

-E

Monday, October 12, 2009

SSD in a low-end netbook

Netbooks tend to live a hard life. They're used in moving cars, they spend a lot of time banging around in backpacks, and so forth. Early netbooks like the Asus eee that practically defined the category used Linux and a small flash memory chip. This dealt quite well with the problem of durability -- flash memory chips don't care about vibration (at least, not about levels of vibration that wouldn't utterly disintegrate the whole computer). The problem is that people want to use their netbooks to view multimedia content, and Linux is woefully inadequate in that area, because Linux users today are either utter geeks (parodied in this Xkcd comic) or are using it for servers where multimedia is not an issue (other than serving it via a web server). So netbooks have moved to using Windows XP rather than Linux.

The problem is that Windows XP does not run well off the slow flash memory chips included with first-generation netbooks, so netbooks have moved toward the cheapest hard drives available. Unfortunately this brings two problems: 1) those hard drives are still painfully slow compared to current state-of-the-art hard drives, and 2) those hard drives have the same vibration and G sensitivity as all hard drives, making them a poor fit for netbooks.

The solution would be a high-speed SSD. They perform much better than low-end hard drives, and the only vibration or G forces that could destroy them would turn the entire netbook into a pile of shards. The problem is that SSDs have typically been expensive. Until now: a 64GB SSD for $150, in this case a Kingston SSDNow V-Series.

64GB doesn't sound like a lot of storage, but I examined the hard drive on my Acer Aspire One netbook and discovered that I was using a whole 20GB of hard drive space. I think my usage of the netbook is probably typical of most people's usage of a netbook -- Internet browsing and light word processing. These aren't computers that you buy to do video processing or music recording, they don't have the CPU horsepower for that, but they're perfectly acceptable for Internet browsing. When I'm bouncing around in my Jeep on field expeditions I don't want to haul around my expensive Macbook Pro, I want something small and durable for doing quick email checks whenever I get near civilization, and the Aspire One suffices for that. Except for the hard drive issue.

Thus I purchased the above SSD and installed it in my Aspire One. I had previously purchased the disk imaging CD/DVD set from Acer to allow re-imaging my netbook when the hard drive failed (note the "when", not "if" -- netbooks live hard lives), and it installed fine onto the SSD. The results have been gratifying. Performance is much better than with the low-end hard drive, and the durability is excellent. The second-generation SSD's have now conquered the stuttering problems that plagued the first-generation SSD's, at least for applications such as netbooks where large writes are rare -- I have never encountered stuttering problems.

What does this mean for the future? It means yet more low-power, energy-efficient netbooks, perhaps higher in price than current netbooks but with better durability and performance. Netbooks will be relegated to the long-battery-life, small-storage-capacity category rather than being marketed on low performance and low price. You will start seeing some netbooks in the $700 range, around the same as a "real" notebook, assuming sufficient performance can be obtained to justify that price. The question is whether Intel will deliberately cripple their Pineview follow-on to the current Atom processors the way they currently cripple the Atom by forcing netbook makers to use the antiquated, high-power-use 945 chipset, which has atrocious graphics performance (i.e., it cannot even play HD videos from YouTube without stuttering, a major problem given that many people buy these things to browse Internet multimedia content). If they do, expect rival chips from AMD and VIA to gain popularity, albeit not with major vendors, due to Intel's anti-competitive behavior of charging vendors more for chips if they use a rival's chips in more than 5% of their shipping computers. Given that there are major markets where Intel's chips are the only available chips, this clearly is going to limit how many jump ship to AMD and VIA. But if Intel can't deliver the performance that people want, somebody will jump ship to AMD or VIA, even if it isn't Dell or HP...

-E

Monday, October 5, 2009

In the Cloud

Cloud computing. Ah, how the buzzwords love to flock. This is no different from the 1970's, when it appeared that the future was going to be large timesharing services. You could deploy your applications in that "cloud" and have redundancy, automatic backups, and so forth without the time and trouble of maintaining your own infrastructure. If you needed more storage, an additional DASD for your VM virtual machine could easily be allocated from the "cloud" of storage devices available at the timesharing service; if you needed more CPU, your application could be deployed on a VM given access to more of a CPU or even to a whole CPU; and so on and so forth. Large timesharing services with IBM 370's and their follow-ons were doing cloud computing before the word existed. There is no (zero) functional difference between an IBM 370 running VM in 1973 and a Dell server running VMware ESXi today, other than the fact that the Dell is much faster and has much larger hard drives, of course. But both do the exact same task, and arguably the IBM 370 did it better, since the IBM 370 would even let you migrate all processes off of a CPU and take that CPU offline and remove it entirely for service, *with no disruption to running jobs*. Something which, I might add, its descendant IBM mainframes are still capable of doing, and which VMware wishes it could do.

So what happened, you ask? Why did large corporations move to networked microcomputers and their own insourced mainframes, and why did smaller businesses move to microcomputers and servers? Part of the reason was data security -- having your data reside with a third-party entity that might go out of business at any time was not an acceptable business risk. But they also ran into the same problem that cloud computing runs into when you try to deploy large enterprise databases into the cloud: a lack of I/O bandwidth. We are talking about an era where 300 baud acoustic couplers were high tech, remember, and where the backbones of the primitive data networks ran at 56kbit and operated in block mode. As a result, user interfaces were crude and based around block transfers of screen data, since real-time updates of screen contents in immediate response to keystrokes were simply impossible. When microcomputers arrived, with their megahertz-speed connections to their video displays and keyboards, that made possible entire classes of applications that were simply impossible on the prior timesharing systems, such as spreadsheets and real WYSIWYG text editing and word processing.

Pipes to cloud computing facilities today are similarly constrained compared to local pipes. 10 gigabit network backbones are now not unusual at major corporations, yet most ISP connections are going to be DS3's operating at 45 megabits per second. It is clear that cloud computing runs into the same communications problem that prior time-sharing operations ran into, except in reverse -- where the problem with the time sharing systems was inadequate bandwidth to the user interface, the problem with cloud computing is inadequate bandwidth to the database. Most major corporations generate gigabytes of data every day. One major producer of graphics cards, for example, has so many NetApp appliances filled with simulation data for their cards that they had to expand their data center twice in the past five years. That is not a problem for a 10 gigabit backbone, but you are not going to move that data into the cloud -- you're hard pressed just to get it onto local servers.

So what makes sense to deploy to the cloud? Well, primarily applications that are Internet-centric and operate upon a limited set of data. A web site for a book store that occasionally makes a query to a back end database server to get current inventory works fine for cloud computing. Presumably the cloud servers are colocated at critical points in the Internet infrastructure so that buyers from around the world can reach your book store and order at any given time, and the data requirements to the back end are modest and, because much of the content is static (War and Peace is always going to have the same ISBN and description for example), much of the data can be cached in those same data centers to reduce bandwidth to the central inventory system. I can imagine that this bookstore might even decide to sell access to their own internally developed system for managing this "cloud" of web application servers to third parties (hmm, I wonder who this bookstore could be? :-). Another possible application is for "bursting" -- where you need to serve a significant number of web pages for only a small part of the year. The Elections Results web site, for example, only gets hammered maybe six times per year, and gets *really* hammered only once every four years (when the Presidential race hits town). It serves a limited amount of data to the general public that is easy to push to data centers and serve from local caches there, and maintaining huge infrastructure that will be used only once every four years makes no sense from a financial point of view. Cloud computing makes a lot of sense there.

But one thing to remember about cloud computing: Even for those applications where it does make sense, it is no panacea. Yes, it removes the necessity to purchase actual hardware servers and find a location for them either in your own data center or in a colo, and provide plumbing for them. But you still have all the OS and software management problems that you have if the servers were local. You still need to deploy an OS and manage it, you still need to deploy software and manage it, you have simplified your life only in that you no longer need to worry about hardware.

At the recent Cloudworld trade show, one of the roundtables made the observation that "the next stage in cloud computing is going to be simplifying deployment of applications into the cloud." I.e., previously it had been about actually creating the cloud infrastructure. Now we have to figure out how to get applications out there and manage them. And past that point I cannot go :-).

-EG

Saturday, September 26, 2009

The product cycle does not end at the doors of QA

Most engineers, I've found, have a very limited view of the product cycle. They get a spec from product marketing. They implement this spec. They hand the product off to QA. The product gets shipped to customers after bugs are fixed. They're done.

The problem is that this limited view of the product cycle utterly ignores the most important question of all for customers, the question that causes the most pain and headache for customers: How will this product be distributed and deployed in the enterprise?

Product marketing's spec doesn't answer this question, usually. For that matter, the customer often has no idea. The end result is that you end up with atrocities like the Windows install cycle, where deploying Windows and Windows applications across your enterprise requires massive manpower and huge amounts of time and trouble.

When you're working on your product's detailed functional specification and architecture, you must also be thinking about how to automate its deployment. You're the one who knows how the technology will work internally. So let's look at the evolution of deployment...

My first contract job after leaving teaching was a school discipline tracking program, needed in order to meet new federal requirements for tracking disciplinary offenses. Once I had the actual program working, the next question I had was, "how will this be deployed?" I knew this company had over a dozen clients scattered all over the state, each of which had at least five schools. There was no way we were going to deploy it by hand to all of those places. The program also required a new table inserted into the database, so you couldn't just drop the program into some location and have it work. And it required a modification of the main menu file to add the new program to the list of programs. So there were at least three files involved here. My solution, given that we had to get this thing deployed rapidly, was to write a shell script (this was on Xenix back in the day), tar it all up, and then give short directions:

  • Place distribution floppy in drive
  • Go to a Unix shell prompt by logging in as user 'secret', then selecting menu option 2/3/4.
  • Type:
    • cd /
    • tar xvf /dev/floppy
    • sh /tmp/install-program.sh
  • Exit all logged-in screens, and re-login.
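The install script itself was nothing fancy. I no longer have the original, so the file names, paths, and database helper below are made up purely for illustration, but it did roughly this:

  • cp /tmp/discipline/discipline.prog /usr/lbin/          # put the new program where the menu system could find it (hypothetical paths)
  • /usr/lbin/dbcreate /tmp/discipline/discipline.schema   # add the new discipline table to the site's database (hypothetical helper)
  • cat /tmp/discipline/menu.add >> /usr/lib/menu/main     # append the new entry to the main menu file (hypothetical file names)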
This worked, but still required us to swiftly ship floppy disks to the central office technology coordinators of all these school districts, and required them to physically go to each school and perform these operations. So the next thing I did was look at the technology available for modem communications on Unix, and decide, "you know, we could do all this with uux!" The modern equivalent would be 'ssh', but this was years before ssh existed.

By the time I left that company, several years later, I could sit at my desk and press one key and send a software update to every school district in our state, which the school technology coordinator could then review and trial at the central office, then, once it was approved, himself (or herself) push one key and send that software to every school in his district. This was all being done via modems and UUCP, since this predated Internet access for schools, but because this was also the era of green screen dumb terminals where 64K-byte programs were large programs, 2400 baud modems were plenty to do the job. We had arrived at a system of deployment that used the minimum manpower possible to deploy this software across a geographically dispersed enterprise. Because of the swiftly changing requirements of state and federal regulators (who would often require updates several times during the course of the year as they decided to collect new chunks of data), this system gave us a considerable cost advantage in the marketplace compared to our competitors, who still required technicians to physically go to each school and install software updates by hand.

Now, this was a specific environment with specific needs. But you should still go into any project aimed at the enterprise with the explicit goal of making it as easy as possible to deploy. The customer should have to enter as little data as possible into the program to make it function. It should Just Work, for the most part. And lest you say, "but my program requires complex configuration in order to work!", you should investigate and make sure that's true. That was thought to be true of enterprise tape backup software, for example -- that setting it up required huge amounts of technical expertise in order to configure SCSI tape jukeboxes. It was the insight of my VP of Engineering at the time, however, that all the information we needed in order to configure the entire SCSI tape subsystem was already exported either by Linux or by the SCSI devices themselves. We told customers to place one tape into each jukebox, then press the ENTER key. They pressed the ENTER key and my scanning routine went out and did the work for them. What was a half-day affair with competitors became a single button press.
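On Linux, the starting point for that kind of auto-discovery is about as simple as it gets -- the kernel will happily enumerate every SCSI device it knows about, tape drives and medium changers included:

  • cat /proc/scsi/scsi
The output lists the vendor, model, and device type of every attached SCSI device, which is most of what you need to find the jukeboxes before probing them further. I'm simplifying, of course -- the real scanning code did considerably more than this -- but the raw information really is just sitting there waiting to be used.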

The point is that a) you must think about how to distribute software updates to multiple enterprises, and across each enterprise, with the minimum necessary human intervention, and b) even the initial deployment of seemingly complex products can become easy once you look at the technology involved and figure out ways to automate the configuration. But you need to be thinking about deployment -- how is this going to be deployed into the enterprise? -- or it's not going to happen. Instead you get the typical product on the market today, which is so expensive to deploy across the enterprise that deployment doesn't happen, or happens only haphazardly. And being typical, in today's day and age, is hardly a path to success...

-E

Monday, September 21, 2009

iFool

My main computing platform, the one I use for all my software development (via VMware, which lets me develop for a half dozen variants of Linux), is a top-of-the-line Apple MacBook Pro 13.3". My phone is an iPhone 3G 16GB. Am I an iFool? Have I drunk the kool-aid? Shouldn't a hardcore Linux penguin be using an Android phone and running Linux on his laptop?

Well, folks, that would be fine and dandy except for one thing: I want to get things done, I don't want to spend all my time trying to get Linux drivers working on my work laptop. And for my phone, I want it to seamlessly sync music with my laptop, I want it to be able to be plugged into the iPod input of my car stereo and just work, I want it to seamlessly sync the address book and bookmarks and notes from my laptop without having to do anything special. In short, I want it to Just Work.

So for my laptop I bought a MacBook several years ago. It Just Worked. And was Unix to boot, so all my normal tools were available from the command line, and GUI versions of them (like Emacs, The Gimp, etc.) just worked. The entire developer toolkit came for free with the computer, so I could even type './configure ; make' in my project and compile it under MacOS if I wanted to do so. I chose the 13.3" form factor because it fits on an airline tray, where bigger laptops won't. I've upgraded to the latest and greatest where it gives me a real advantage -- most recently to the new aluminum MacBook Pro with the 7-hour battery and Firewire 800 and nVidia graphics chipset -- and whenever I upgrade, the new MacBook sucks in my accumulated years of files, notes, photos, and videos without a problem. It Just Works, meaning I can do my job, instead of fiddle with the technology all day long.

For my phone, I used a Palm Treo running the Palm OS for many years after it was obsolete. Yes, the email and web clients sucked. But it synced my addresses, notes, ToDo lists, and calendars without a problem. The problem came when I was stuck in an airport waiting room needing to check on what was up with my itinerary. The hoary old Palm web browser just completely choked on the airline's web site. So what to do?

Now, I have an HTC Wizard that I used on T-Mobile some years ago. It runs Windows Mobile 5. WM5 will not sync with my Mac without extra-cost software, and my experience with WM5 was that it was technically astute, but had the user interface from hell. The Treo's user interface was simple, plain, and easy to use one-handed, the WM5 user interface really wanted three hands -- one to hold the phone sideways, and two to thumb on the thumb-board.

Then I looked at Android. And Android, alas, reminded me sorely of Windows Mobile 5. It is technically astute, but its user interface was similarly designed by geeks for geeks, rather than designed for simplicity and ease of use. As with WM5, getting anything done requires two hands, and the user interface is complex and, well, ugly.

So I arrived at the iPhone by default. It syncs seamlessly with my MacBook, and the user interface, while more complex than that of my old Treo, is fairly simple to use to do ordinary things. When I plunk it into its WindowSeat in my Jeep I can do the most common operations with one finger of one hand (mostly selecting an iTunes playlist since I'm using it for music while hooked into my car stereo at that point). No fumbling with a stylus, no poking at tiny indecipherable little pictures with the point of said stylus, everything is big and bold and easy to reach out and touch.

So what are the lessons here? First of all, if you're designing a user interface, complexity is the enemy. The iPhone was blasted for its simple -- some say simplistic -- user interface, much as PalmOS was blasted for its user interface. Yet both manage to make a virtue of simplicity and thereby make their devices much easier to use. Geeks love adding complexity to products, as do product managers looking to satisfy marketing checklists. It is your job as a software development manager to push back on the continual drive for user interface complexity. I once had one of my engineers give me a design proposal that was five web pages' worth of highly technical stuff, all of which was useful to geeks but which would simply be gibberish to our user base. I sent it back to him with all five pages X'ed out and, on the launch page that led into that long series, I drew a selection box and a button beside it. The user selected the previously-downloaded configuration to upload, and then the program just did what it was supposed to do. We probably missed out on some marketing checkboxes somewhere, but the end product was much more usable, because most users simply do not want, need, or care about all the technical details of what exactly is supposed to happen behind the scenes. They just want the computer to do the right thing. They want it to Just Work.

The second lesson is that lack of a marketing checkbox often means nothing in real life. The iPhone lacked cut-and-paste for the first two years of its life. This was a missing marketing checkbox, but it didn't hurt the iPhone's sales any. The iPhone became the best-selling smartphone in the USA despite not having a feature that everybody claimed was "necessary". The simplicity of use that Apple got from not having that feature was far more attractive to customers than the additional feature would have been, and when Apple finally developed a way to add cut-and-paste that would not impact the simplicity of the product, it was just icing on an already tasty cake as far as most iPhone customers were concerned.

The final lesson is that people just want to get work done. They want to get work done without having to fight the technology all the time, and without having to look at the internals of the technology to figure out what's wrong and how to fix it. I'm very good at debugging our products. When there's a defect that nobody else can figure out, I'm the guy who looks at it, goes and puts a few printf statements in the right place (source code debuggers are for wimps, heh!), says "Ah, I see," and then tells the appropriate programmer what went wrong and why and how to fix it. But while I'm doing that, I don't want to have to be debugging my laptop too. Both Windows and Linux force me to fix OS stuff all the time rather than actually do my work. MacOS just works. And that should be your product too -- the customer should install it, maybe type in a few setup options, and then it Just Works.

So am I an iFool? Well, yes. And you should be too. By which I do not mean that you should go out and buy an iPhone and a Mac (especially with the recent release of the Palm Pre, which appears to have learned some of the lessons of its predecessors), but, rather, that you should embrace the lessons that these products teach -- simplicity as a virtue, simplicity as not being a barrier to sales, and a product that just works, without a constant need to tweak or maintain it in order to keep it working. Do that, and your product has a chance to become the next iFool's purchase. Make it a typical overly complex difficult to manage product, on the other hand... well, then you're just another mess in a large marketplace full of messes, and will stand out about as well as a bowling pin at a bowling alley, just one more indistinguishable product in a marketplace full of indistinguishable products. Which isn't what you were setting out to do, right?

-E

Wednesday, September 16, 2009

So what about China?

Yet more paranoia about China and outsourcing engineering to China.

Pretty much every laptop computer you buy today is already made in China. That Macbook Pro that the paranoid executive won't take to China? Made in China.

There are some reasons to suspect corporate espionage, but the nationality of the perpetrator is irrelevant. I'm unclear about the exact motivations for the rampant China bashing that seems to happen in our media today, but the Chinese as a people have every motivation to not engage in dishonest dealing -- they need the West's money in order to continue modernizing their economy. China was a third world nation only 20 years ago, with an industrial base similar to that of most Western nations in the 1950's but 10 times more people to support with that industrial base. They've come a long ways in the past twenty years, but still have a lot further to go and they know it. Engaging in organized skulduggery (as vs. the ordinary disorganized industrial espionage that happens between business competitors) is not in their best interests and the least of our worries.

I've managed Chinese programmers working on security products. They're smart, but green. They still have a lot to learn about what it takes to get products through the whole product cycle from concept to final delivered product in the customer's hands, and they know it. Perhaps at some time in the future we'll need to worry about Chinese programmers inserting time bombs into security products, but today? Again, they have too much to lose, especially if their code is being regularly reviewed by senior American engineers, as was true in our case.

In short, you should definitely follow your normal procedures for detecting and closing security vulnerabilities, but singling out one nation -- China -- for special scrutiny is just plain silly. Yes, follow good practices -- don't leave your cell phone out and about, same deal with your laptop computer, make sure your firewall software is running, don't stick foreign media into your computer's ports or hard drives or install unsigned programs, if you've outsourced development have regular design and code reviews to catch security issues early, etc. But I have to think that all this emphasis upon one nation as a "threat" has more to do with politics than with technology, and distracts us from the real problems of securing our computers against real threats -- which are more likely to come from Eastern European virus writers than anything coming out of China.

-E

Monday, September 14, 2009

A new gadget

This is an HP OfficeJet Pro 8500. It is a fax machine, copy machine, scanner, photo printer, and just plain printer. It uses a water-resistant pigment-based ink so while it's not a laser printer, it fills the same basic need. It's fast, and the inks are priced reasonably on a per-page basis. So far, so good.

It replaces a laser printer, an inkjet printer, a scanner, and a fax machine, and frees up a huge amount of space in my home office even if it does sort of overlap the sides of my filing cabinet. And because it's network-connected to my Airport Extreme (via a network cable, not via WiFi), I can print wirelessly from my MacBook Pro while I'm at the dining room table or in the bedroom.

It just showed up in my Bonjour window when I went into Printer Preferences to add it as a printer. The scanner just showed up there too. Both work, and the document feeder works too for both copying and scanning, albeit I wish it had a duplexer. But for under $200, it's hard to be disappointed that the document feeder "only" gets one side of pages you feed it...

-Eric

Sunday, September 13, 2009

Language wars!

This one gets all the flames when development teams meet to decide on what language to use for the next project. The crusty old Unix guy in the corner says "C, it was good enough for Dennis and Ken, it's good enough for us." The Microsofty says "C++ of course. C is for Neanderthals." The J2E guy says, "Why does anybody want to use those antiquated languages full of stack smash attacks and buffer overflows anyhow? Write once, run everywhere!" And finally, the Python/Ruby guy says, "look, I can write a program all by my lonesome in two weeks that would take months for a whole team to write in Java, why is there even any question?"

And all of them are wrong, and here's why: They're talking about technologies, when they should be talking about the problem that needs solving and the business realities behind that.

I'll put myself pretty firmly in the Python/Ruby camp. My first Python product hit the market in 2000, and is still being sold today. My most recent project was also written primarily in Python. I also had a released product written largely in Ruby, albeit that was supposed to be only a proof-of-concept for the Java-based final version (but the proof of concept shipped as the real product, huh!). Still, none of these products are in Python or Ruby because of language wars, and indeed these products also included major portions written in "C" that did things like encryption, fast network file transfer, fast clustered network database transactions, and so forth. Rather, the language was chosen because it was the right tool for the job. The first product was basically a dozen smaller projects written in "C" with a Python glue layer around them to provide a good user interface and database tracking of items. The second product, the one prototyped in Ruby, was prototyped in Ruby because Ruby and Java are quite similar in many conceptual ways (both allow only single inheritance, both have a similar network object execution model, etc.), and it made sense to prototype a quick proof-of-concept in a language similar to what the final product would be written in. The last project was written in Python because Python provided easy-to-use XML and XML-RPC libraries that made the project much quicker to get to market, but it also included major "C" components written as Unix programs.

So, how do you choose what language to use? Here's how NOT to choose:

  1. The latest fad is XYZ, so we will use XYZ.
  2. ABC programmers are cheap on the market, so we'll use ABC.
  3. DEF is the fastest language and is the native language of our OS, so we'll use DEF.
  4. I just read about this cool language GHI in an airline magazine...
Rather, common sense business logic needs to be used:
  1. We need to get a product out the door as quickly as possible to beat the competition to market.
  2. We need tools that support rapid development of object- and component-oriented programs that are easy to modify according to what user feedback says future versions will need.
  3. Performance must be adequate on the most common platforms our customers will be using.
  4. Whatever tools we use must allow a reasonable user experience.
The reality is that if you're trying to get a product to market as quickly as possible, you want to use the highest level language that'll support what you're trying to do. Don't write Python code when you can describe something in XML. Don't write Java code where Python will do the job. Don't write C++ code where Java will do the job. Don't write "C" code where C++ will do the job. Don't write assembly code where "C" will do the job. In short, try to push things to the highest level language possible, and don't be ashamed to mix code. I once worked on a product that had Ruby, Java, and "C" components according to what was needed for a particular part of the product set. There were places where Ruby lacked functionality and its performance was too poor to handle the job, for example, but Java would do the job just fine. And there were places where absolute performance was needed, or where we were interfacing to low-level features of the Linux operating system, where we went straight to "C" -- either as a network component accepting connections and data in a specified format, or via JNI or the Ruby equivalent.

The whole point is to get product out the door in a timely manner. If you decide, "I will write everything in 'C' because it is the native language of Unix and my product will be smaller and faster," you can get a product out the door... in three years, long after your competition's product hits the market and gains traction. At that point you'll be just an also-ran. What you have to do is get product out the door as quickly as possible with the necessary functionality, features, and performance (not the theoretical best, but "good enough"), and then work on getting traction in the marketplace. Perfection is the enemy of good enough, and seeking perfection often doesn't even produce a product that's any closer to perfection than the product originally written to be "good enough". That program that took three years to write? That company went bankrupt, and one reason was that the constant search for perfection ended up with a product that was inflexible, difficult to modify for different problem sets, and, frankly, poorly architected. Seeking the smallest memory footprint and theoretical best performance resulted in a product that failed in the marketplace, because they missed the main reason we write programs in the first place: to meet customer needs. A program whose architecture is about memory footprint and performance at all costs is unlikely to have an architecture capable of being changed to meet changing customer needs, so not only were they late to market -- their product sucked. And the hilarious thing is, they didn't even manage to achieve the performance their head architect claimed the architecture would deliver! So much for the three year "write everything in C" plan... late to market, poor performance, hard to modify, and they claim that those of us who advocate "good enough" rapid application development in the highest-level language feasible are "sloppy" and "advocating non-optimal practices"? Heh!

Next up... I talk about tools, and the odd story of how I got into the Linux industry in the first place. Note - I was the guy against using Linux in our shop. But that's a tale for another post :).

-E

Is the end of rotational storage near?

Okay, so this one's been predicted for over 20 years now. I remember Jerry Pournelle making this prediction in Byte Magazine back in the early '80s. But I just went over to NewEgg.com and found a decent 64GB SSD for $150. That's not going to replace the 2 terabyte RAID array in my Linux server, but for the average consumer it's more than enough storage for anything they're ever going to do with a computer -- and 128GB SSDs, more storage than the average consumer will ever use, are going to be just as cheap within six months.

Okay, now I hear you laughing. "64 gigabytes? Heck, my collection of unboxing videos is bigger than that!" But you're not the average person. The average person looks suspiciously like me when I'm using my Acer Aspire One netbook on my Jeep expeditions rather than my top-of-the-line MacBook for development. My Aspire One gets used pretty much the same way the typical person uses their computer: it does Internet browsing and email, I suck photos out of my camera into the Acer and resize them and post them on the Internet, and it handles a single major application -- in my case, loading detailed topographical maps and imagery into my DeLorme PN-40 GPS. It came with a 120GB hard drive. I'm using about 35GB of that right now, because I copy all my photos off onto an external drive when I get home (I don't trust the integrity of a hard drive that's been bouncing around in a Jeep on an expedition into the Mojave Desert). Why wouldn't I put a 64GB "hard drive" into the thing that won't ever fail -- at least, not because I bounce it around in a Jeep? After all, if I need more space, I can always plug in an external drive and copy stuff off. Frankly, 64GB is plenty of space for everything the average person ever does with a personal computer, and 128GB -- which will cost $150 by the end of this year -- is most decidedly enough space for the average consumer.

So maybe the end of rotational storage isn't quite here. But unless some new consumer application comes along that requires huge amounts of storage, we may be seeing the last gasp of rotational storage in consumer hardware. After all, who needs a 500GB hard drive that might crash and fail, when a 128GB SSD costs the same amount of money and is plenty big for everything the average person wants to do with a computer?

-E

Thursday, September 10, 2009

Leadership vs. arrogance

At a recent meeting, we were talking about Carly Fiorina, what she did to HP that almost destroyed the company, and the sad state of leadership in corporate America today. "They're arrogant," someone said. "They had it good for so many years, and thought it was all their own doing." I disagreed. "Steve Jobs is arrogant. The problem is leadership, not arrogance."

To put it bluntly: Steve Jobs is an arrogant jerk. He has decided opinions about what his products should look like, and if you're one of his employees and you're going to go up against him, you'd better have a darn good reason why what you want to do will make the product easier to use for the majority of customers, or add functionality that the majority of customers (not just a few geeks) need. You'd better have data, reasons, pretty pictures showing how it makes the product more consistent and easier to use, and so forth. If you can't do that, he'll tell you to go away and do it his way. When Steve came back to Apple, he came back to a company in chaos, where multiple fiefdoms were feuding, where all attempts to replace the failing MacOS 9 were shot down as "too risky", and where Apple's products were starting to look just like everybody else's beige boxes, except running an OS far inferior to Windows NT or even Windows 95 on any technical basis. There was no shortage of talent. What there was, was a shortage of leadership -- someone who would take that talent, listen to what they had to say, process it, then say "This is what we're going to do, bang bang bang," so that people walked out with their marching orders knowing exactly what they were going to do, why they were going to do it, and that failing to do it was not an option. It took, in effect, a real jerk being in charge to rein in all those prima donnas and get them all rowing in the same direction.

Unfortunately, it's as if all of modern-day corporate America thinks being a jerk is how to be a leader. It's not. Rather, having decided opinions about what makes a product line the best in the world -- opinions informed by listening to dozens of people and then applying your own judgement -- is what makes a leader. It's that whole vision thing, once again. It takes a certain degree of arrogance to push that vision onto a company because, let's face it, most of us in engineering are pretty arrogant ourselves. If we've been in the business for a number of years, we've seen companies come and go, we've seen products come and go, and we've developed our own opinions about what makes a good product and what makes a bad one. But what a lot of managers and far too many CEOs get confused about is thinking that being arrogant is leadership. It's not. It's a common byproduct of leadership, but leadership is something else entirely -- call it vision. And given that many CEOs got their job by sucking up to the Board of Directors rather than by having any vision of their own, they're followers by nature, not leaders. They're like the former CEO of DEC whose idea of "leadership" was to survey current owners of DEC VAX minicomputers and ask them what they wanted as future products from DEC. Their answer was, of course, bigger and faster DEC VAX minicomputers. You'll notice that DEC is no longer in business -- because following is not leading, whether you're following customers, following industry trends, or following conventional wisdom. And you can follow for only so long before you fall so far behind that you go down the tubes.

Yet these followers believe that being arrogant and expressing uninformed opinions -- opinions they refuse to change despite all data to the contrary -- somehow turns them into leaders. That is the problem with far too many companies today: they are led by scared followers who are afraid they're going to be found out as frauds, as not really being leaders, and who try to bluster their way into being seen as leaders through arrogance and inflexibility. But whether we're talking about a team lead or a CEO, all that bluster manages to do is create an unmotivated, disillusioned work force that in the end is not going to create the innovations needed to move the company forward.

And that's the end of today's discussion of leadership. I'm going to return to this in the future, because leadership is one of those things that is hard to quantify, but you know it when you see it. I'll attempt to quantify it anyhow because, well, I'm just arrogant that way (heh) -- I always want to understand things, and the best way to do that is to quantify them. Hopefully that effort will be useful both to me and to future team leads. We'll see, eh?

- Eric

Tuesday, September 8, 2009

Real life vs. movie

So there is apparently an EMP bomb doomsday conference going on today, presumably to generate some leads for hardening financial institution computers. Must be a shortage of ways to spend taxpayer money.

In reality, terrorists do not need EMP devices to disable Internet-connected devices. All they need to do is rent a botnet and run a DDoS. Furthermore, the biggest risk to the integrity of financial data and computing systems is not the explosion of a nuclear device, which is what would be required to create an electromagnetic pulse capable of wiping out data (and I might add that if you've just been vaporized in a nuclear explosion, you're unlikely to care anymore anyhow). Rather, the biggest risk is insider fraud and technological meltdowns caused by inadequate internal controls. Remember, Countrywide and its fellow sub-prime lenders and securitizers such as Goldman Sachs did more damage to the nation's financial system than any nuke ever has.

In short, as usual, people are focusing on flash, not meat. Computer security, and financial integrity in general, is not a glamour product. It's a multi-level system of firewalls, security software, policies, procedures, regulatory activities, and checks and balances intended to detect and remedy any intrusion, extrusion, or fraud before it can culminate in actual data loss or financial loss. Or, as Bruce Schneier is fond of saying, security is a process, not a product. It's a lot of hard work, too often not done correctly, but I suppose the hypothetical effects of fictional nuclear devices are more exciting for our mindless press to blather about...

-Eric

An introduction

Welcome to my new blog. It may seem odd that, just as other forms of social media are taking off, I'd take up blogging, which is now widely regarded as "old school". But in a sense I was blogging at my old site long before blogging existed as such; I just quit doing it because blogging with /bin/vi as your main blogging tool was no fun, and I never had time to write any automated blogging tools worth the name between architecting and releasing multiple products for multiple companies.

This blog is primarily for my technology ramblings. I'm not going to talk about politics, what I did over the Labor Day weekend, or anything like that. I will point out interesting new technologies, talk about development methodologies, and talk about failed and un-failed projects and the difference between good management and poor management. I released my first product in 1988, over twenty years ago, so I have a little bit of experience in the area. I am proud to say that I have never been part of a failed project (i.e., one that failed to ship), though in more than one case that was despite incompetent management that made the development process much more protracted than necessary. I've had good managers, I've had bad managers, and I've managed multiple teams of my own. Sharing some of this expertise with younger engineers and freshly minted managers might seem a bit arrogant, but I have the battle scars -- and delivered product -- to justify it.

How often will posts appear here? Good question. I'm going to aim for at least twice a week. Like most good engineers I have no shortage of opinions about what makes a good piece of software or what distinguishes a well-managed company from a poorly-managed one, and as a serial startup guy and general geek I love talking about virtualization vs. containerization and cloud computing and things of that sort, even if it's just to remind folks that only the terminology is new -- there are no fundamentally new concepts at work here. So I don't have a shortage of material. What I do have is a shortage of time -- that whole startup thing, remember? So we'll see. In the meantime... Virtualization is successful because operating systems are weak. Read. Think. Discuss. Enjoy :).

- Eric